  • Does a 25GbE NAS That Isn’t a DIY Project Even Exist?

    Looking for a pre-built NAS or mini PC with 25GbE? Explore the best options for high-speed external storage for your VMware homelab using NFS or iSCSI.

    So, you’ve got a powerful homelab setup. Maybe a couple of ESXi hosts humming away, running all sorts of virtual machines. You’ve even upgraded your networking to screaming-fast 25GbE. But there’s a bottleneck, isn’t there? Your VM storage.

    You’re probably thinking about how to get all those VMs off the local drives and onto a shared storage solution that can keep up with your 25GbE speeds. The goal is simple: a centralized, fast repository for your virtual machines, probably using NFS or iSCSI.

    The first thought is often, “I’ll just build something.” A custom ITX build sounds fun, right? You get to pick every component, from the CPU to the exact 25GbE network card. But let’s be honest, that’s a project. It takes time, research, and dealing with the potential gremlins of component compatibility.

    What if you just want something that works? Are there any off-the-shelf options that give you that 25GbE goodness without the headache of a full custom build?

    The Pre-Built NAS Route

    When you think of network storage, you probably think of Synology or QNAP. These companies have built their reputations on creating user-friendly NAS appliances. For a long time, their focus was on the 1GbE and 10GbE world. But as 25GbE becomes more accessible for prosumers and small businesses, they’ve started to step up.

    • QNAP: These guys are often ahead of the curve when it comes to high-speed networking. They offer a range of tower and rackmount NAS units that either come with 25GbE ports built-in or have PCIe slots where you can easily add one of their own 25GbE network cards. Models in their TVS-h series, for example, are often a good place to start looking. They’re powerful enough to handle the demands of a virtualized environment and the ZFS-based QuTS hero operating system is solid for data integrity.

    • Synology: Synology has also been adding faster networking options to their higher-end models. You’ll typically be looking at their XS, SA, or UC series for native 25GbE or expansion capabilities. Their DiskStation Manager (DSM) software is famously easy to use, which is a huge plus if you want to set it and forget it. The trade-off is that they can sometimes be a bit pricier for the same level of hardware performance compared to QNAP or a custom build.

    The biggest advantage of going with a pre-built NAS is the ecosystem. The software is mature, the hardware is validated, and there’s a huge community and support system if you run into trouble. Setting up an iSCSI LUN or an NFS share for your ESXi hosts is usually just a few clicks in a friendly web interface.
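
If you ever want to script that step instead of clicking through the vSphere UI, here’s a rough pyVmomi sketch of mounting an NFS export as a datastore. Everything in it, the ESXi hostname, credentials, NAS address, export path, and datastore name, is a placeholder for your own lab, and it assumes the pyvmomi package is installed.

```python
# Attach an NFS export from the NAS as an ESXi datastore using pyVmomi.
# Host, credentials, and share details below are placeholders for your lab.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # homelab-style: skip cert validation
si = SmartConnect(host="esxi01.lab.local", user="root", pwd="secret", sslContext=ctx)

try:
    content = si.RetrieveContent()
    # Grab the first ESXi host this connection can see.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]

    spec = vim.host.NasVolume.Specification(
        remoteHost="192.168.50.10",      # the NAS's 25GbE interface
        remotePath="/volume1/vmstore",   # NFS export on the NAS
        localPath="nas-vmstore",         # datastore name as ESXi will show it
        accessMode="readWrite",
        type="NFS",                      # use "NFS41" if the NAS exports NFS 4.1
    )
    ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
    print(f"Mounted datastore: {ds.name}")
finally:
    Disconnect(si)
```

iSCSI is a similar story: you add the NAS as a discovery target on the host’s software iSCSI adapter and rescan for the new LUN.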

    What About Mini PCs?

    The world of mini PCs is exploding. Brands like Minisforum, Beelink, and others are packing serious power into tiny boxes. The problem? Finding one with a 25GbE port is like finding a needle in a haystack.

    Most of these mini PCs top out at 2.5GbE or maybe 10GbE. The challenge is both physical space and cooling. A 25GbE NIC can run hot and requires a PCIe slot, which most ultra-compact mini PCs just don’t have.

    You might find some niche or industrial mini PCs that offer this, but they are often expensive and may not be optimized for use as a storage server. For now, if your heart is set on a small form factor, you’re likely better off looking at a custom build where you can control the chassis and component selection.

    The Verdict: Build vs. Buy

    So, where does that leave us?

    If your main goal is to get a reliable, high-speed storage solution up and running for your VMware lab with minimal fuss, a pre-built NAS from a reputable brand like QNAP or Synology is your best bet.

    1. Check the Specs: Look for models with at least a quad-core CPU (an Intel Xeon or AMD Ryzen is ideal) and make sure you can install enough RAM (16GB is a good starting point for virtualization).
    2. Verify the Slot: If it doesn’t have 25GbE built-in, confirm it has a free PCIe 3.0 x8 or x16 slot to accommodate a proper 25GbE card.
    3. Read the Reviews: See what other homelabbers or small business users are saying about using that specific model for iSCSI or NFS performance.

    Building a custom server is still a fantastic option if you want maximum performance for your dollar and enjoy the process of tinkering. But for a simple, stable, and performant external storage solution that just works? The appliance route has never been more appealing. You can unbox it, plug it in, and spend your valuable time managing your VMs, not wrestling with hardware.

  • My Synology NAS Died. Here’s How I Built a Free Replacement.

    My Synology NAS failed and I couldn’t afford a new one. Learn how I used an old PC with Proxmox to build a free, powerful replacement for my data.

    It started with a quiet, unsettling hum. Then, nothing. My Synology NAS, the little black box that dutifully held my digital life, was dead.

    I’m not going to lie, my first feeling was panic. The Synology wasn’t just a network drive; it was the central hub for my most important data. It hosted my password manager database and served as a private backup for all my family photos, running 24/7 in a closet.

    After the panic came the practical dread. I looked up the price of a new one, and my wallet immediately started sweating. A replacement was not in the budget. Not even close.

    So, there I was. A dead NAS, two hard drives full of data I couldn’t easily access, and no money for a shiny new box. It felt like a dead end. But then I remembered something.

    The Forgotten Hero in the Corner

    For a while now, I’d been tinkering with an old Dell OptiPlex computer I got for next to nothing. I had installed Proxmox on it—a free, open-source tool that turns a computer into a powerful virtualization server. It was quietly running a couple of small things, but it was mostly bored, waiting for a real job.

    Could this old office PC become my new NAS?

    The idea was exciting. Instead of a closed, proprietary box, I could build something custom. Something I controlled completely. And best of all, the cost would be zero. I already had the hardware.

    What is Proxmox, Anyway?

    Think of Proxmox as a manager for computers-within-a-computer. It lets you run multiple, isolated operating systems on a single physical machine. You can run full virtual machines (VMs), which are like entire separate computers, or something called LXC containers.

LXC containers are incredibly lightweight. They share the host computer’s operating system kernel, so they use way fewer resources than a full VM. For something like a NAS, a container is perfect. It’s efficient, fast, and doesn’t waste power.

    My plan was simple: Set up a new LXC container on my Proxmox server and install a NAS operating system inside it.

    Finding a Synology-Like Experience for Free

    The best part of a Synology is its operating system, DSM. It’s clean, easy to use, and has a great app store. I needed something that came close to that experience without the price tag.

    After a bit of research, I landed on a few popular choices:

    • TrueNAS SCALE: It’s incredibly powerful and popular, but some people find it a bit complex for a first-time setup.
    • OpenMediaVault (OMV): This one hit the sweet spot. It’s known for being simple, stable, and easy to manage through a clean web interface. It felt like the perfect fit for someone who just wants things to work.

    I decided to go with OpenMediaVault. Setting it up in a Proxmox container was surprisingly straightforward. There are tons of great guides online that walk you through the process step-by-step. The key is “passthrough,” where you give the OMV container direct control over the hard drives.
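
To make the “passthrough” part less abstract, here’s a minimal sketch of the bind-mount flavor of it: the data disk gets mounted on the Proxmox host, and that path is handed to the container. The container ID and paths are made up, so adjust them for your setup; guides that pass entire disks through to the container use a different mechanism.

```python
# Bind-mount a host directory into an LXC container so OMV can share it.
# Assumes the disk is already mounted on the Proxmox host at /mnt/tank
# and that the OMV container has ID 101 - both are placeholders.
import subprocess

VMID = "101"
HOST_PATH = "/mnt/tank"          # where the Proxmox host mounted the data disk
CONTAINER_PATH = "/srv/media"    # where OMV will see it inside the container

# pct is Proxmox's LXC management tool; -mp0 adds mount point 0 to the container config.
subprocess.run(
    ["pct", "set", VMID, "-mp0", f"{HOST_PATH},mp={CONTAINER_PATH}"],
    check=True,
)

# Restart the container so the new mount point shows up.
subprocess.run(["pct", "reboot", VMID], check=True)
```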

    What About the Data on My Old Drives?

This was the scariest part. My two hard drives were pulled from the dead Synology. How could I get my files off them? The good news is that Synology builds its volumes on standard Linux storage tech: md RAID, usually with LVM and an ext4 or Btrfs file system on top. Your data isn’t trapped in some weird proprietary format.

    The solution was a USB-to-SATA adapter. I plugged one of the hard drives into my main computer (which runs Linux, but you can use a bootable Ubuntu USB stick on any PC). With a few simple commands in the terminal, I was able to mount the drive and see all my files. Everything was there. The relief was immense. I carefully copied everything over to a temporary drive before installing them into the Proxmox machine.
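
For anyone facing the same rescue job, these are roughly the commands involved, wrapped in a small Python script (you may need to install mdadm and lvm2 first). The exact array and volume names depend on your DSM version and RAID layout, so treat the device paths as examples and double-check the lsblk output before mounting anything.

```python
# Read a drive pulled from a dead Synology on a regular Linux box.
# Device/volume names below are examples - check lsblk/mdadm output on your system.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Synology volumes sit on Linux md RAID (often with LVM on top for SHR),
# so assemble whatever arrays the kernel can find on the attached disk.
run(["mdadm", "--assemble", "--scan"])

# If DSM used SHR/LVM, activate the volume group (commonly named vg1000).
run(["vgchange", "-ay"])

# See what appeared, then mount the data volume read-only to copy files off safely.
run(["lsblk", "-f"])
run(["mkdir", "-p", "/mnt/syno"])
run(["mount", "-o", "ro", "/dev/vg1000/lv", "/mnt/syno"])  # or /dev/mdX for non-SHR volumes
```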

    The New, Improved Setup

    With my data safe and OpenMediaVault running, I started rebuilding my little data hub.

    First, I set up network shares so all the computers in my house could access the storage. Easy.

    Next, I tackled my photo backup. I’d been hearing a lot about Immich, which is basically a self-hosted Google Photos alternative. It automatically backs up photos from your phone, organizes them by date, and even has AI-powered search. I installed it using Docker (which OMV supports beautifully), and it’s been fantastic.

    Finally, I needed my password manager. The Synology was running my Bitwarden database. The open-source community has an amazing, lightweight alternative called Vaultwarden. It’s fully compatible with Bitwarden apps but uses far fewer resources. I got it running in another Docker container in minutes.
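
If you’re curious what that looks like in practice, here’s a sketch of the same thing done through the Docker SDK for Python instead of a docker run one-liner. The port and data directory are just examples, and you’ll want a reverse proxy with HTTPS in front of it before pointing the Bitwarden apps at it.

```python
# Spin up Vaultwarden in a Docker container using the Docker SDK for Python.
# The port and data path are just examples; keep the data somewhere you back up.
import docker

client = docker.from_env()

client.containers.run(
    "vaultwarden/server:latest",
    name="vaultwarden",
    detach=True,
    ports={"80/tcp": 8080},  # web vault reachable on http://<host>:8080
    volumes={"/srv/vaultwarden": {"bind": "/data", "mode": "rw"}},
    restart_policy={"Name": "always"},
)
```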

    It Started with a Failure, But Ended with a Win

    Losing my NAS was a headache I didn’t need. But it forced me to find a better way. My new “NAS” is an old computer that was gathering dust, but it’s more powerful and flexible than the expensive box it replaced.

    I’m not locked into one company’s ecosystem anymore. I can run whatever services I want. And because it’s all running on Proxmox, I can easily back up my entire NAS setup, experiment with new VMs, and have far more control.

    If you’re in a similar boat, or just want to take control of your own data without spending a fortune, don’t overlook the old hardware you might have lying around. A little bit of tinkering can save you a lot of money and leave you with something that’s truly your own.

  • Meet Nomad: The Offline Media Server on a USB Stick

    Discover Nomad, a tiny, self-hosted media server on a USB stick. Stream movies, music, and books offline, directly to your devices. Perfect for homelab fans.

    I stumbled across a fascinating open-source project the other day, and it’s one of those ideas that’s so simple and smart, you can’t help but love it.

    Imagine having a personal, pocket-sized media server. Not just a storage drive, but a full-fledged server that can stream movies, music, and even books to your phone or laptop, all without needing an internet connection.

    That’s the whole idea behind a project called Nomad. It’s a tiny, self-contained media server built into a USB stick. It’s designed for anyone who loves tinkering with technology, especially folks in the homelab community, but its appeal is much broader. Think of it as your own private, offline Netflix and Spotify, ready to go wherever you are.

    So, How Does This Tiny Server Work?

    At its heart, Nomad is an ESP32-S3 board—a tiny, low-cost computer—housed in a USB drive form factor. You load up an SD card with your favorite media files—videos, songs, podcasts, ebooks—and plug it into the Nomad.

    When you power it on (by plugging it into any USB port), it works its magic. It automatically creates its own Wi-Fi network with a captive portal, just like the Wi-Fi at a hotel or coffee shop. You just connect your phone, tablet, or laptop to the Nomad’s Wi-Fi, and instantly, you have access to a simple web interface where all your files are neatly organized and ready to stream.

    No internet? No problem. No cloud subscriptions? Not needed. It’s a completely self-hosted, offline-first solution. It’s perfect for road trips, flights, or just using around the house without relying on your main network.

    It’s Getting Some Serious Upgrades

    The project is constantly evolving, and the creator has been working on an experimental version with some really practical new features. This is where Nomad goes from a cool novelty to a genuinely useful tool.

    Here are some of the highlights:

    • Better File Management: You can now manage your media files remotely through a web browser. That means you can upload, delete, or rename files over Wi-Fi. It also has a clever hardware button that switches it into a standard USB drive mode, so you can drag and drop files directly from your computer.
    • Plays Nice with Your Favorite Apps: Thanks to DLNA support, you can now stream your media directly to popular apps like VLC and Kodi, or even to compatible smart TVs. It generates a simple playlist URL, making it super easy to integrate with the media center software you already use.
    • A Book Lover’s Dream: For all the readers out there, the addition of OPDS support is fantastic. It allows e-book reader apps to connect directly to Nomad. You can browse your library and even track your reading progress right from your favorite reading app.
    • HD Video Streaming: While it wasn’t originally designed for it, the system can now handle streaming a 1080p HD movie, provided the file is well-encoded and you’re using a decent SD card. It’s pushing the limits of the hardware, but it’s impressive that it works.

    The Future of Nomad Looks Even Brighter

    The developer isn’t stopping there. A more powerful version, called Nomad Studio, is already in the works. This sounds like it will address some of the current hardware limitations and add some serious power.

    The plan for the “Studio” version includes dual-band Wi-Fi (including 5 GHz for much faster speeds), support for 4K video, and full auto-discovery on your network. It will also feature a “Home-Server Mode,” allowing it to seamlessly integrate into your existing home network alongside your other devices.

    From DIY Project to a Real Product?

    What started as a personal project has gained so much interest that the creator is considering producing a small run of pre-assembled units. The project will always remain open-source for the DIY community, but offering a plug-and-play version could help fund its development and make it accessible to more people.

    It’s a great example of a passion project growing into something more. It’s not trying to replace your massive, multi-terabyte home server. Instead, it’s carving out a unique niche for itself: a simple, reliable, and incredibly portable way to carry your digital world in your pocket. In an age of constant connectivity, there’s something wonderful about a device that proudly works offline.

  • What RAM Actually Works with the Aoostar WTR MAX?

    Struggling to find compatible RAM for your Aoostar WTR MAX? This guide explains the specs and limitations to help you choose the right memory.

    So, you got your hands on the Aoostar WTR MAX. It’s a powerful little machine, but if you bought the barebones version, you might be scratching your head about one crucial component: the RAM.

    Finding the right memory for this mini-PC feels a bit like a treasure hunt, doesn’t it? The official information from Aoostar is… well, a little sparse. It leaves you wondering what memory sticks actually work.

    I found myself in the same boat. You know there are two RAM slots, and you know it can handle up to a whopping 128GB of memory. But the specifics? That’s where things get fuzzy.

    This post is my attempt to clear up the confusion. Think of it as a shared notebook for everyone who owns, or is thinking about buying, a WTR MAX.

    What We Know So Far

    Let’s start with the official details, however limited they may be. The Aoostar WTR MAX has two DDR5 SODIMM slots. It supports both ECC and non-ECC RAM.

    Here’s the important part:

    • Max Capacity: 128GB total.
    • ECC Support: Yes, it supports full ECC RAM. This is great for anyone building a serious home server who needs that extra layer of data integrity.
    • The Big Quirk: This is the main thing to watch out for. The manufacturer states it does not support two 48GB non-ECC sticks (for a total of 96GB). However, it does support two 48GB ECC sticks.

It’s a strange limitation, right? But it’s a key piece of the puzzle, and it highlights the difference between on-die ECC, which virtually all DDR5 has but which isn’t “true” ECC, and the full ECC SODIMMs you’d need for that 96GB setup.

    Let’s Solve This Together

    Since the official manual won’t give us a clear list of tested and compatible RAM modules, the next best thing is to create one ourselves. It’s frustrating to buy expensive hardware only to find out it doesn’t work.

    The goal here is simple: to build a community-driven list of what works and what doesn’t. If you have a WTR MAX up and running, you can help. By sharing the specific RAM you’re using, you could save someone else a major headache (and a restocking fee).

    Here’s what would be helpful to know:

    • Brand and Model: What’s the exact model number of your RAM? (e.g., Crucial CT2K16G56C46S5)
    • Capacity: What is the size of each stick and the total capacity? (e.g., 2 x 16GB for 32GB total)
    • Type: Is it ECC or non-ECC?
    • Did it Work? A simple yes or no.
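
If your WTR MAX is already up and running Linux, most of that information can be pulled straight from the modules themselves. Here’s a rough sketch that shells out to dmidecode (it needs root, and the exact field contents vary a little between firmware versions).

```python
# Print installed RAM modules: manufacturer, part number, size, and whether
# the module is ECC (total width of 72 bits vs. 64 bits of data width).
# Needs root and the dmidecode tool; field contents can vary by firmware.
import subprocess

out = subprocess.run(
    ["dmidecode", "--type", "memory"], capture_output=True, text=True, check=True
).stdout

for block in out.split("\n\n"):
    if "Memory Device" not in block or "No Module Installed" in block:
        continue
    fields = {}
    for line in block.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    ecc = "ECC" if fields.get("Total Width") not in (None, fields.get("Data Width")) else "non-ECC"
    print(
        f"{fields.get('Manufacturer', '?')} {fields.get('Part Number', '?')} "
        f"- {fields.get('Size', '?')} - {ecc}"
    )
```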

    Sharing this info helps current and future owners make informed decisions. It turns a frustrating solo problem into a much easier group project. When official documentation fails us, the community is the best resource we have.

    So, if you’ve already found the perfect RAM for your WTR MAX, don’t keep it a secret! Drop a comment and let us know what you’re running. Let’s make this little machine as easy to set up as possible.

  • Feeling Lost in Your Files? Let’s Talk About Mounting and Access

    Confused by terms like ‘mounting’ and ‘file access’? This friendly guide breaks down these core computer concepts in a simple, easy-to-understand way.

    Ever felt like your computer is speaking a different language? You hear words like “mounting a drive” or “file access denied” and you just nod along, hoping no one asks you to explain. I’ve been there. It can feel a bit like everyone else got a secret manual for their computers, and you were out sick that day.

    But here’s the thing: these concepts aren’t as complicated as they sound. So, grab a coffee, get comfy, and let’s untangle this together.

    First off, why should you even care?

    Think about your digital life. You’ve got photos, documents, music, movies… all your important stuff. Knowing a little about how your computer organizes and controls access to these files is like knowing how the keys to your house work. It gives you control, keeps your stuff safe, and helps you get to what you need, when you need it.

    Good file management isn’t just for tech wizards. It’s a basic skill that can make your digital life so much easier.

    So, what’s this “mounting” thing all about?

    Okay, let’s start with “mounting.” It sounds so dramatic, doesn’t it? Like you’re preparing for a medieval battle.

    But really, mounting is a super simple idea.

    Think of it like this:

    You have a USB drive. When you plug it into your computer, a new icon pops up, and you can suddenly see all the files on it. That’s mounting! Your computer has “mounted” the USB drive, making its contents available to you. When you “eject” the drive safely, you’re “unmounting” it.

    So, mounting is simply the process of making a storage device (like a hard drive, a USB stick, or even a remote server) accessible to your computer’s operating system.

    It’s not just for physical things you plug in. Here are a few other times you might “mount” something:

    • Network Drives: At work, you might have a shared drive where your team keeps all its projects. Accessing that drive on your computer? You’re mounting it.
    • Cloud Storage: If you use a service like Dropbox or Google Drive through their desktop apps, they often work by mounting a virtual drive on your computer.
    • Disk Images: Sometimes you download a program and it comes as a .dmg file (on a Mac) or an .iso file (on a PC). When you open it, it acts like a temporary drive. You’ve just mounted a disk image.

    See? Not so scary. Mounting is just telling your computer, “Hey, I want to use the files on this thing right here.”
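
If you’re curious what your own machine has mounted right now, here’s a tiny Python example using the third-party psutil library. On Windows it will list drive letters; on Linux or macOS it lists mount points.

```python
# List everything the operating system currently has mounted.
# Requires the third-party psutil package: pip install psutil
import psutil

for part in psutil.disk_partitions():
    usage = psutil.disk_usage(part.mountpoint)
    print(
        f"{part.device} is mounted at {part.mountpoint} "
        f"({part.fstype}, {usage.free // 2**30} GB free)"
    )
```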

    And what about “access”?

    Now let’s talk about “access.” This one is a bit more straightforward. It’s all about permissions. Who gets to do what with a file or folder?

    Imagine you live in a shared house.

    • Your Room: You have full access. You can go in, rearrange the furniture, and even throw things out. That’s “read and write” access.
    • The Living Room: Everyone in the house can use it, sit on the couch, and watch TV. That’s a shared space with “read and write” access for all housemates.
    • Your Housemate’s Room: You probably can’t just walk in and start redecorating. You might not even have a key. That’s “no access” or “read-only access” (you can see the room from the hallway, but you can’t go in and change things).

    File access on your computer works in a very similar way. For any file or folder, you can set permissions:

    • Read: You can open and view the file, but you can’t change it.
    • Write: You can change, edit, or delete the file.
    • Execute: This is mostly for programs. It means you can run the application.

    These permissions are super important for security and organization. They stop you from accidentally deleting important system files, and they keep your personal documents private on a shared computer.

    A real-world example

    Let’s say you have a family computer. You create a folder called “Family Photos” and you want everyone to be able to see the pictures, but you’re the only one who can add new ones or delete old ones.

    • You would give everyone in the family read-only access.
    • You would give yourself read and write access.

    Simple as that.
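
On a Linux or Mac machine, that exact setup boils down to setting a couple of permission bits. Here’s what it looks like in Python; the folder path is made up, and Windows handles permissions differently, so treat this as a Unix-flavored sketch.

```python
# "Family Photos" permissions, Unix-style: the owner can read and write,
# everyone else can only read. The folder path is just an example.
import os
import stat

folder = "/home/shared/Family Photos"

# Folders need the execute bit so people can open (enter) them.
os.chmod(folder, stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP | stat.S_IROTH | stat.S_IXOTH)  # 755

for name in os.listdir(folder):
    path = os.path.join(folder, name)
    if os.path.isfile(path):
        os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)  # 644
```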

    You’ve got this!

    So, there you have it. “Mounting” is just making your files available. “Access” is about who gets to do what with them.

    You don’t need to be a tech genius to understand these ideas. Just thinking about them in these simple ways can make you feel a lot more confident when you’re managing your digital world.

    Next time you plug in a hard drive or connect to a shared folder, you’ll know exactly what’s happening. You’re mounting it. And if you ever see an “access denied” message, you’ll have a much better idea of what it means.

    Welcome to the club. You’re officially in on the secret.

  • My Favorite Home Server Trick: The Mini PC + NAS Combo

    Want a powerful home server on a budget? Learn how to combine a new mini PC with your old computer as a NAS for the perfect, efficient setup.

    I have an old desktop computer sitting in my closet. You probably do, too. For me, it’s an old Dell tower that saw me through college. It’s slow, it’s big, and it feels wasteful to just throw it away. For a while, it was my makeshift home server, running Plex and a few other things. But it was noisy, power-hungry, and frankly, not very good at it anymore.

    Then I started looking at mini PCs. These little boxes are quiet, sip electricity, and pack a surprising punch. This led me to a thought: what if I could get the best of both worlds? What if I used a new mini PC for all the smart stuff, but kept my old computer around for one simple job: holding all my files?

    It turns out, this is a fantastic idea. You can absolutely pair a zippy new mini PC with a clunky old desktop, and it might just be the most budget-friendly, common-sense way to build a powerful home server.

    The Big Idea: Splitting Brains and Brawn

    Think of it like this: your home server does two main things. It runs applications (the “brains”), and it stores files (the “brawn”).

    • The Brains: This is stuff like Plex or Jellyfin (for your movie library), game servers, or backup software. These tasks need a decent processor (CPU) to work well, especially Plex when it has to transcode a video file for your phone or tablet. This is the perfect job for a modern mini PC.
    • The Brawn: This is just storage. It’s a bunch of hard drives holding your movie and music files. This task doesn’t require a fast processor at all. It just needs to be reliable and accessible. This is the perfect new job for your old computer.

    By splitting these jobs between two machines, you get some serious benefits. Your 24/7 server (the mini PC) uses way less electricity, and you get a much faster, smoother experience for your apps. All while reusing hardware you already own.

    So, How Does This Actually Work?

    The setup is simpler than you might think. We’re essentially turning the old PC into a Network Attached Storage, or NAS for short. A NAS is just a computer whose only job is to serve files to other devices on your home network.

    When you want to watch a movie, the process looks like this:

    1. You open Plex on your TV.
    2. Your TV tells your mini PC (where Plex is running) to play the file.
    3. The mini PC says, “Okay, I need that file.” It then requests it from your old PC over your home network.
    4. The old PC finds the movie file on its hard drive and sends it back to the mini PC.
    5. The mini PC streams it to your TV.

    The key to making this fast and seamless is your home network. You absolutely want to have both computers plugged directly into your router with Ethernet cables. Don’t try to do this over Wi-Fi. A standard gigabit wired network is more than fast enough to handle even high-quality 4K movie files.
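
If you want to sanity-check that claim with numbers, the back-of-the-envelope math looks like this. The bitrates are typical ballpark figures, not measurements from my setup.

```python
# Rough headroom check: gigabit Ethernet vs. typical movie bitrates.
# Bitrate numbers are ballpark figures for illustration only.
GIGABIT_MBPS = 1000          # theoretical; real-world SMB/NFS throughput is a bit lower
streams = {
    "1080p Blu-ray rip": 30,   # Mbps, roughly
    "4K HDR remux": 80,        # Mbps, close to the worst case you'll hit
}

for name, mbps in streams.items():
    print(f"{name}: ~{mbps} Mbps -> about {GIGABIT_MBPS // mbps} simultaneous streams on gigabit")
```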

    Your Two Paths to a DIY NAS

    Okay, you’re sold on the idea. How do you actually turn that old machine into a file-hoarding workhorse? You’ve got two main options.

    1. The Super Simple Way: A Windows Share

    This is the path of least resistance. You don’t have to install any new software on your old PC.

    • Just gather all your media files into a single main folder (e.g., D:\Media).
    • Right-click on that folder, go to “Properties,” then the “Sharing” tab, and set it up as a network share.
    • That’s it. Your mini PC can now see and access this folder over the network.

    2. The “Proper” Way: A Dedicated NAS Operating System

    This approach takes a little more work but gives you a much more powerful and robust solution. You’ll wipe the old computer and install a free operating system designed specifically for being a NAS.

    Two popular choices are TrueNAS CORE and OpenMediaVault (OMV).

    These tools give you a clean web interface to manage your drives, check on their health, and set up advanced features (like RAID, which protects you from a single hard drive failure). It’s the more professional route, and it’s what I’d recommend if you’re feeling a bit adventurous.

    Putting It All Together

    Once you have your old PC set up as a NAS (using either method), the final step is to make that storage easily accessible on your mini PC.

    On your mini PC (assuming it’s running Windows), you’ll want to “map” the network drive. This makes the storage on your old PC show up as if it were a local drive. You’d open File Explorer, go to “This PC,” and find the “Map network drive” option. You can assign it a letter like M: (for Media), and point it to the network share you created.
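
If you’d rather script that mapping than click through File Explorer, the same thing can be done from Python on the mini PC. The computer name and share name below are placeholders for whatever you set up on the old machine.

```python
# Map the old PC's share to M: and peek at what's inside.
# "OLD-PC" and "Media" are placeholders for your actual computer and share names.
import subprocess
from pathlib import Path

share = r"\\OLD-PC\Media"

# "net use" is the classic Windows way to map a network drive.
subprocess.run(["net", "use", "M:", share, "/persistent:yes"], check=True)

media = Path("M:/")
movies = list(media.rglob("*.mkv"))
print(f"Found {len(movies)} .mkv files on the mapped drive")
```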

    Now, just tell your applications where to find everything. In Plex, you’ll add a new library and point it to your shiny new M: drive.

    And you’re done. You’ve successfully upgraded your home server, saved a bunch of money, and given that old computer a useful new lease on life. It’s a win-win-win.

  • So You Have Some Old Servers. Now What?

    Feeling curious about what’s possible with your own servers? This friendly guide helps you start your home lab journey and find cool, practical projects.

    So you’ve got that feeling.
    That little itch in the back of your brain that says, “I know how to use this stuff, but I don’t really get how it all works.”

    Maybe you’ve set up a home network. Maybe you’ve even got a couple of old servers from work sitting in a corner, waiting for a purpose. You see people online building these incredible digital playgrounds in their basements—home labs—and you want in. But you’re stuck on the first step: Where do you even begin?

    It’s easy to feel overwhelmed. You’re not just trying to follow a tutorial; you want to grasp the fundamentals. You want to move past copying and pasting commands and start truly understanding the logic behind them.

    If that sounds familiar, you’re in the right place. Let’s talk about how to get started on this journey.

    First, Forget “Mastery”

    The goal isn’t to become a master of everything overnight. That’s impossible and a sure path to burnout. The real goal is to build a foundation, one brick at a time. The best way to do that is to stop thinking about “learning networking” and start thinking about solving a problem.

    Your first project shouldn’t be some grand, abstract goal. It should be something tangible that makes your own life a little better or more interesting.

    Think about it:
• Are you tired of ads on every device in your house?
• Do you wish you had your own private cloud for files and photos, separate from Google or Apple?
• Want to host a private Minecraft server for you and your friends?
• Curious about automating your smart home devices?

    Pick one. That’s your starting point.

    Your Digital Workbench: The Hypervisor

    You mentioned having some servers, which is awesome. Old enterprise gear is perfect for a home lab. It’s built to run 24/7 and gives you tons of room to experiment.

    So, what’s the best way to set them up? My advice is to start with a hypervisor.

    Don’t let the name scare you. A hypervisor is just a lightweight operating system that’s designed to run other operating systems. Think of it like a digital workbench. Instead of installing one OS on your server, you install a hypervisor like Proxmox (it’s free and amazing for this).

    Once Proxmox is running, you can create Virtual Machines (VMs) and Containers (LXCs) with just a few clicks.

    • A Virtual Machine (VM) is exactly what it sounds like: a full, independent computer running inside your server. It has its own virtual hard drive, RAM, and network connection. This is perfect for when you need to run a completely separate OS, like Windows inside your Linux-based server.
    • A Container (LXC) is a more lightweight solution. Instead of virtualizing a whole computer, containers just wall off a piece of the main OS for a specific app. They’re faster, use fewer resources, and are perfect for running single applications like a web server or a database.

    Using a hypervisor is the key to a flexible lab. It lets you spin up a new VM to test an idea and tear it down if it doesn’t work, all without messing up your other projects. It’s the ultimate sandbox.
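
To give you a feel for how scriptable this all is, here’s a rough sketch that asks Proxmox’s REST API to create a container using an API token. Every name, ID, and template path in it is a placeholder for your own setup, so don’t copy it verbatim.

```python
# Create a small LXC container via the Proxmox VE REST API.
# Node name, token, VM ID, storage, and template path are all placeholders.
import requests

PROXMOX = "https://proxmox.lab.local:8006"
NODE = "pve"
HEADERS = {"Authorization": "PVEAPIToken=root@pam!homelab=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"}

resp = requests.post(
    f"{PROXMOX}/api2/json/nodes/{NODE}/lxc",
    headers=HEADERS,
    verify=False,  # typical for a homelab with a self-signed certificate
    data={
        "vmid": 110,
        "hostname": "pihole",
        "ostemplate": "local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst",
        "storage": "local-lvm",
        "memory": 512,
        "cores": 1,
        "net0": "name=eth0,bridge=vmbr0,ip=dhcp",
        "password": "change-me",
    },
)
resp.raise_for_status()
print(resp.json())
```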

    Where to Find Cool Projects

    Okay, so you’ve picked a goal and set up your Proxmox workbench. Now for the fun part: finding amazing projects and the guides to get them running.

    The internet is overflowing with ideas, but here are a few places I always come back to:

    • Reddit Communities: The absolute best places for inspiration are subreddits like r/homelab and r/selfhosted. You’ll see what other people are building, the problems they’re running into, and the creative solutions they find. It’s a goldmine of real-world experience. r/datahoarder is another great one for storage nerds.
    • “Awesome” Lists on GitHub: Search GitHub for “awesome self-hosted” or “awesome-sysadmin.” You’ll find curated lists of incredible open-source software for almost any task you can imagine.
    • YouTube Tinkerers: Find creators who are actually doing the things you want to do. Channels like Techno Tim and Jeff Geerling (who you might already know) offer fantastic, step-by-step tutorials for specific projects.

    A Few Starter Project Ideas:

    • Block Ads Network-Wide: Install Pi-hole or AdGuard Home in a container. It’s a relatively simple project that provides an immediate, noticeable benefit.
    • Build Your Own Cloud: Set up Nextcloud. It’s like having your own private Google Drive, complete with photo backups, calendars, and contacts.
    • Stream Your Media: Install Plex or Jellyfin. You can load them up with your movies and TV shows and stream them to any device, anywhere.
    • Monitor Your Services: Run Uptime Kuma to get a cool dashboard that shows you if all your other services are online.
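
Uptime Kuma gives you a polished dashboard for that last idea, but the core of it is simple enough to sketch in a few lines of Python. The URLs below are placeholders for whatever services you end up running.

```python
# A bare-bones version of the "monitor your services" idea:
# poll a few URLs and report which ones answer. URLs are placeholders.
import time
import requests

SERVICES = {
    "Pi-hole": "http://192.168.1.10/admin/",
    "Nextcloud": "http://192.168.1.11/status.php",
    "Jellyfin": "http://192.168.1.12:8096/health",
}

while True:
    for name, url in SERVICES.items():
        try:
            up = requests.get(url, timeout=5).ok
        except requests.RequestException:
            up = False
        print(f"{name:<10} {'UP' if up else 'DOWN'}")
    time.sleep(60)
```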

    The Real Secret: Just Start

    That’s it. That’s the secret.

    The difference between the person who dreams of a home lab and the person who has one is that the second person just started. They picked one small project, probably failed a few times, broke things, and learned something in the process.

    Your lab is a lab. It’s meant for experiments. Don’t be afraid to break it. With a hypervisor like Proxmox, the cost of failure is just a few clicks to delete a VM and start fresh. Every mistake is a lesson. Write down what you did, what went wrong, and what you learned.

    You don’t need the newest gear. You don’t need to know everything. You just need a little curiosity and the willingness to try. So go ahead, fire up that old server, and build something cool.

  • My Server Froze and Blamed Me: Cracking the NMI Code

    Facing the ‘Unrecoverable System Error (NMI)’ on your HP ProLiant server? Here’s a step-by-step guide to diagnosing and fixing this frustrating issue.

    I was so excited. I’d just gotten my hands on an HP ProLiant MicroServer Gen8. If you’re a home lab enthusiast, you know this little box is a legend. It’s compact, capable, and the perfect foundation for building a new setup. My plan was to run Debian 12 on it, maybe for a NAS, maybe for some containers. The possibilities felt endless.

    I got everything set up, installed a fresh copy of Debian, and let it idle. For the first day, everything was perfect. And then, it wasn’t.

    I walked over to the server to find the screen frozen. The cursor wasn’t blinking. Nothing. Even worse, the “Health LED” on the front was blinking a menacing red.

    My heart sank. The red light of doom.

    Chasing Ghosts in the Logs

    Thankfully, HP servers have iLO (Integrated Lights-Out), a fantastic tool that lets you manage the server remotely, even when it’s powered off or frozen. I logged into the iLO web interface and checked the “Integrated Management Log.”

    And there it was, in black and white:

    Unrecoverable System Error (NMI) has occurred.

    Right below it, another entry:

    User Initiated NMI Switch

    NMI stands for Non-Maskable Interrupt. In simple terms, it’s a hardware error so critical that the system can’t just ignore it. It’s the equivalent of your computer’s hardware screaming, “STOP EVERYTHING! Something is seriously wrong.”

    The “User Initiated” part was just weird. I certainly hadn’t pressed any magic NMI button (which, by the way, is a real thing on some servers for forcing a crash dump). It felt like the server was freezing and then blaming me for it.

    The First Suspect

    My first thought went to the newest component I’d added: a cheap SAS card I’d bought from AliExpress. It was an Inspur 9211-8i, which I was hoping to use for connecting a bunch of large hard drives. It seemed like the most likely culprit.

    So, I pulled the card out.

    I reinstalled a fresh copy of Debian 12 on an SSD connected to the server’s built-in ports and let it run. For about 24 hours, things were quiet. I thought I’d fixed it.

    Then, the red light started blinking again. Same freeze. Same NMI error in the logs.

    It wasn’t the SAS card. The problem was deeper.

    What Could It Be? A Process of Elimination

    This is the part of any troubleshooting process that can be either fun or maddening. You have to work through the possibilities, one by one.

    Here was my thinking:

    • It happens with the OS running. I noticed the server was stable if it was just sitting in the BIOS or stuck in a boot loop without an OS drive. The NMI error only happened after Debian was up and running for a day or two. This meant it was likely an issue with the OS interacting with a piece of faulty or incompatible hardware.
    • It’s not the storage controller. I’d ruled out the add-in SAS card, and the problem still happened with an SSD on the internal controller. While a bad SAS cable could theoretically cause issues, it felt less likely to be the root cause of such a critical system halt.
    • So, what’s left? The core components. The CPU and the RAM.

    The CPU was a Xeon E3-1220L V2, a solid processor for this machine. While not impossible, a CPU failure is relatively rare.

    That left the RAM. I was using two sticks of DDR3 ECC memory. The specs were correct, but it was non-HP branded RAM. And with servers, especially older ones like the Gen8, that can be a big deal. They can be incredibly picky about memory. Even if the specs—ECC, speed, voltage—all match, a tiny incompatibility in the module’s design can cause bizarre, intermittent errors just like this.

    The Path to a Solution

    An NMI error is almost always a hardware problem. While a software or driver bug can trigger it, the root cause lies in the physical components. Based on my experience, here’s the checklist I’d recommend to anyone facing this exact problem.

    1. Test Your Memory First. This is the number one suspect. Don’t just assume your RAM is good because it seems to work for a while. Download MemTest86+ and let it run for a full 24 hours. Intermittent RAM faults often don’t show up in a quick 1-hour test. If you can, beg, borrow, or buy a single stick of official HP-branded RAM for this server and see if the system is stable. If it is, you’ve found your culprit.
    2. Strip It Down. Go back to basics. Disconnect everything that isn’t absolutely essential. Run the server with just the CPU, one stick of RAM, and your boot drive. If the system is stable for a few days, start adding components back one at a time, with a few days of testing in between each addition.
3. Check Your Temperatures. Use iLO to keep an eye on the system and CPU temperatures; there’s a small monitoring script sketched right after this list. Overheating can absolutely cause the system to trigger a protective NMI and shut down. Make sure your fans are spinning and the heatsinks are free of dust.
    4. Reseat Everything. It sounds too simple, but it works surprisingly often. Power down the server, unplug it, and physically remove and reseat the CPU, the RAM, and all power and data cables. A slightly loose connection can cause chaos.
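
For the temperature check in step 3, the Gen8’s iLO 4 exposes its sensors over the Redfish API on reasonably recent firmware. Here’s a rough polling sketch; the address, credentials, and chassis ID are placeholders, and sensor names vary by model.

```python
# Poll temperature sensors from iLO's Redfish API (iLO 4 with recent firmware).
# The address, credentials, and chassis ID are placeholders for your server.
import requests

ILO = "https://192.168.1.20"
AUTH = ("admin", "password")

resp = requests.get(
    f"{ILO}/redfish/v1/Chassis/1/Thermal",
    auth=AUTH,
    verify=False,  # iLO ships with a self-signed certificate
    timeout=10,
)
resp.raise_for_status()

for sensor in resp.json().get("Temperatures", []):
    name = sensor.get("Name", "unknown sensor")
    reading = sensor.get("ReadingCelsius")
    if reading is not None:
        print(f"{name}: {reading} °C")
```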

    For me, the journey was a reminder that when you’re building with used or non-certified hardware, you’re sometimes in for an adventure. These cryptic errors aren’t just a roadblock; they’re a puzzle. And while it was frustrating, solving it piece by piece is what makes running a home lab so rewarding. The server isn’t just a tool—it’s a project.

  • Can You Plug a Power Strip Into a UPS? Let’s Talk About It.

    Ever wondered if you can plug a surge protector into a UPS? Discover the real reasons why it’s a bad idea and what you should do instead to protect your tech.

    So you finally did it. You bought an Uninterruptible Power Supply (UPS) to keep your precious tech safe from power flickers and surges. Smart move.

    You unbox it, plug it in, and start connecting your gear. Your computer gets an outlet. Your monitor gets another. And then… you’re out of plugs. But you still have your router, your speakers, and a phone charger to connect.

    Sitting right there in your box of spare cables is a trusty power strip. It seems like the perfect solution. Just plug the strip into the UPS, and you’ve got five or six new outlets. Problem solved, right?

    Well, maybe not. If you’ve ever Googled this, you’ve probably seen a dozen forum posts all screaming the same thing: Don’t plug a surge protector into a UPS!

    But hardly anyone explains why. Most of the warnings are vague, talking about overloading or just saying “it’s bad.” I get it, that can be super frustrating when you’re just trying to understand your gear. So let’s actually break down the real reasons.

    It’s Not Just About Overloading

Most people assume the big risk is plugging too much stuff into the power strip and overloading the UPS. And yes, that’s definitely a risk. A UPS is rated for a specific maximum load, usually listed in both watts and VA (volt-amperes). A power strip makes it incredibly easy to plug in a bunch of devices that, when running all at once, could exceed that limit.
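
A quick way to actually do that math before plugging anything in: add up the nameplate draw of everything you plan to connect and compare it against the UPS rating. The numbers below are illustrative guesses, so check your own gear’s labels.

```python
# Rough UPS load budget. Wattage figures are illustrative guesses -
# check the labels or a power meter for your actual gear.
devices_watts = {
    "desktop PC": 250,
    "monitor": 30,
    "router + modem": 20,
    "NAS": 45,
}

ups_watts = 900      # e.g. a "1500VA / 900W" consumer UPS
ups_va = 1500

total = sum(devices_watts.values())
print(f"Estimated load: {total} W on a {ups_watts} W / {ups_va} VA UPS "
      f"({100 * total / ups_watts:.0f}% of the watt rating)")
```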

    But let’s assume you’re careful. You’ve done the math, and you know your total power draw is well under the limit. You’re in the clear then, right?

    Not quite. The main issue isn’t about the amount of power; it’s about how the power is managed.

    The Real Problem: Conflicting Protection

    Here’s the thing: most modern UPS units are more than just a big battery. They also have high-quality power filtering and surge protection built right in. They work constantly to provide clean, stable electricity to your devices.

    Surge-protected power strips also have their own protection circuitry. It’s usually a simple component called a Metal Oxide Varistor (MOV). An MOV’s job is to sit there and watch the voltage. If it sees a sudden, massive voltage spike (like from a lightning strike), it diverts that dangerous excess energy to the ground wire, saving your electronics.

    When you plug one into the other, you create a situation where these two systems can start fighting each other.

    Think of it like this: your UPS is trying to create a perfect, clean bubble of power. The power strip’s surge protector is designed to pop any voltage spikes. But because the UPS is already conditioning the power, its sensitive circuits can misinterpret the power strip’s MOV as a power anomaly or a wiring fault. This can cause a few problems:

    • It can wear out your UPS battery. The UPS might think something is wrong with the incoming power and switch to its battery unnecessarily, even when there’s no outage.
    • It can reduce the overall protection. In a real surge event, the two systems can interfere. The power strip might try to divert the surge, but the UPS might not react as it should because the strip gets in the way. In essence, they can make each other less effective.
    • It can send dirty power to your devices. The way a simple MOV works can introduce noise and fluctuations into the power line, which is the exact opposite of what your UPS is trying to achieve.

    You’ll Probably Void Your Warranty

    If the technical explanation isn’t convincing enough, here’s a much simpler one: virtually every UPS manufacturer will void your warranty if you do this.

Companies like APC, CyberPower, and Tripp Lite are very clear in their manuals: daisy-chaining surge protectors or power strips off the battery-backed outlets of a UPS is not supported. Doing it typically also voids their connected equipment protection policy, the guarantee that they’ll replace your gear if it gets fried while connected to their UPS. If you’ve got a power strip plugged in, that policy is likely null and void.

    So, What’s the Right Solution?

    Okay, so you can’t use your surge protector. But you still need more outlets. What are you supposed to do?

    Luckily, the solution is simple.

    Use a basic, non-surge-protected power strip.

    These are often called “power distribution units” or PDUs in the data center world, but you can find simple versions for home use. They are essentially just extension cords with a bunch of extra outlets. They have no internal filtering, no MOVs, nothing. They just pass power through.

    This is the perfect solution. You let the UPS handle all the surge protection and battery backup, and the PDU simply gives you the extra outlets you need for low-power devices like your router, modem, or chargers.

    So, to recap:

    • Don’t plug a surge-protected power strip into the battery side of a UPS.
    • Do let your UPS handle the protection.
    • Do use a basic, non-surge-protected PDU if you need more outlets.

    It’s a small distinction, but it makes a big difference. It ensures your equipment is properly protected, keeps your warranty intact, and lets your expensive UPS do its job without any interference.

  • Older vs. Newer CPUs: A Surprising Look at Home Server Power Draw

    Thinking of using an older Xeon CPU for your home server? We compare a Broadwell Xeon vs. a modern Coffee Lake chip to see which is more power-efficient at idle.

    I spend a lot of time thinking about my home server. Maybe too much time. It’s a fun hobby, and like any good hobby, it sends you down some interesting rabbit holes. Recently, I got stuck on a question about power consumption.

    I have a pretty decent little server box running right now. It’s built around a 9th-gen Intel chip—a Coffee Lake processor with a good balance of cores and speed. It handles my TrueNAS setup and a couple of virtual machines without breaking a sweat.

    But I’m a tinkerer. And I have a spare motherboard for older, high-end server hardware sitting on a shelf. It’s an X99 board, which uses the LGA-2011-3 socket. This got me thinking. Could I swap my modern setup for an older, beefier server CPU? I was looking at Xeon processors from the “Broadwell” era, some with a massive 18 cores.

    The trade-off seemed simple enough: I’d lose some single-core speed, but I’d gain a ton of cores. For a server running multiple VMs, that sounds great, right?

    But then I hit the real question: what about power draw when the server is just… sitting there?

    The Big Deal About Idle Power

    For a machine that’s on 24/7, the power it uses while doing nothing is actually a huge deal. Most of the time, my server isn’t transcoding video or running complex calculations. It’s idling, waiting for a request.

    My goal was to keep the whole system humming along at under 50 watts, and ideally, under 40 watts at idle. This is where the comparison between a modern consumer chip (Coffee Lake) and an older server chip (Broadwell) gets really interesting.

    On paper, more cores running at a lower clock speed might seem efficient. But technology has come a long way.

    The Allure of Old Server Gear

    First, let’s admit it: building with used enterprise hardware is cool. You can get CPUs that once cost thousands of dollars for a tiny fraction of the price. The idea of having a 16 or 18-core beast humming away in a closet is tempting. For virtualization, more cores are almost always better.

    So, the idea of swapping my 8-core Coffee Lake chip for something like a Xeon E5-2690 v4 felt like a massive upgrade in multitasking power. I was fine with the single-core performance being a bit worse. My server tasks are spread out, not dependent on one super-fast core.

    But could this powerhouse system sip power gently when it wasn’t busy?

    Modern Tech Has a Secret Weapon

    Here’s the thing about newer CPU generations like Coffee Lake. The improvements aren’t just about raw speed. A huge amount of engineering has gone into making them incredibly efficient at doing nothing.

    Modern CPUs have very sophisticated “C-states,” which are sleep states that let them power down parts of the chip when they aren’t needed. They can drop into a very deep sleep almost instantly between keystrokes or mouse movements.

    Older platforms are just… not as good at this. And it’s not just the CPU. The motherboard chipset is a huge factor. The X99 chipset, which the Broadwell-E Xeons use, is known for being a bit of a power hog itself. It was designed for performance first, at a time when idle efficiency wasn’t the top priority for servers.

    So, the fight isn’t just “Broadwell CPU vs. Coffee Lake CPU.” It’s “Broadwell CPU + X99 Platform vs. Coffee Lake CPU + Modern Platform.”

    So, What’s the Likely Outcome?

    After digging around and looking at what other home labbers have experienced, a clear picture started to form.

    • The Coffee Lake System: A system with a 9th-gen Intel CPU, a modern motherboard, and a couple of drives can easily idle in the 20-35 watt range. That’s incredibly low.
    • The Broadwell-E System: Getting an X99-based system with a high-core-count Xeon to idle below 50 watts is a real challenge. It’s not impossible, but it’s tough. Most setups seem to idle in the 50-80 watt range. That extra 30-40 watts of idle power draw adds up quickly over a year of 24/7 operation.

    The higher core count of the Broadwell Xeon is amazing, but you pay for it with a much higher power “floor.” The system just uses more energy to exist, even before you ask it to do any real work.
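
To put “adds up quickly” into numbers, here’s the yearly cost of an extra 35 watts of idle draw, assuming an electricity price that you should swap for your own rate.

```python
# Yearly cost of an extra ~35 W of idle power draw, running 24/7.
# The electricity price is an assumption - plug in your own rate.
extra_watts = 35
hours_per_year = 24 * 365
price_per_kwh = 0.30  # per kWh; adjust for your utility

extra_kwh = extra_watts * hours_per_year / 1000
print(f"{extra_kwh:.0f} kWh per year -> about {extra_kwh * price_per_kwh:.0f} per year extra")
```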

    My Final Verdict: Sticking With Modern

    In the end, I decided to stick with my current Coffee Lake setup.

    While the siren song of 18 cores was tempting, the practical reality of the higher idle power was a dealbreaker for me. My server spends 95% of its life idling. That power floor matters more than the performance ceiling. For my use case—a storage server with a couple of lightweight VMs—the modern chip is simply the more sensible, and cheaper, choice to run long-term.

    If I had a workload that constantly hammered all the cores, the math might be different. But for a typical home server, it turns out that newer, even with fewer cores, is often the smarter path. It’s a great reminder that progress isn’t always about the biggest numbers, but often about the quiet efficiency happening in the background.