# M. Hamzah Khan's Personal Blog - Self-proclaimed genius, and ruler of the Internet. DevOps Engineer the rest of the time.

> M. Hamzah Khan's Personal Blog is a technically rich and personally curated space focused on DevOps, self-hosting, homelab architecture, Kubernetes, CI/CD workflows, Linux system administration, Home Automation and 3D printing.

--------------------------------------------------------------------------------
title: "Parenting Like a DevOps Engineer: Managing the Chaos of Family Life"
date: "2025-06-12"
url: https://www.hamzahkhan.com/parenting-like-a-devops-engineer-managing-the-chaos-of-family-life/
--------------------------------------------------------------------------------

Father's Day just passed, which got me thinking—not just about fatherhood in general, but about how *weirdly* useful my job as a DevOps engineer has been in helping me parent. I have three kids: two sons (8 and 6 years old) and one daughter (4 years old). They're amazing, unpredictable, and chaotic—kind of like a Kubernetes cluster that's constantly in flux, demanding careful monitoring, quick rollbacks, and a whole lot of automation to keep from spiralling into an unmanageable mess.

I'm not the world's greatest parent. Far from it. But I'm learning. Slowly. And somewhere between incident response and bedtime battles, I've realised that parenting, like DevOps, is mostly about managing chaos, making tiny, incremental improvements, and iterating on what works. Just like in DevOps, the key to a happy home is good 'observability' – mainly through the faint sounds of mischief from the other room.

A few months ago, my six-year-old began resisting going to school. Each morning turned into a dramatic struggle. When we asked him why he didn't want to go, he would simply shrug and mumble, "I don't like it." Unfortunately, that didn’t provide us with much actionable information. In engineering, when problems arise, we start by gathering context.
We don’t jump to conclusions; instead, we observe and investigate. So, one day, I invited him into my home office—my safe space—and told him it was our safe space now. "In here," I explained, "we're friends who can talk about anything, from the silliest thing to the craziest. Just us. No pressure." He sat quietly in the chair beside me for a while. Then, finally, he said: > "I don't like school because… I don't know how to talk to the other kids." That hit me hard. He wasn't being defiant; he was simply overwhelmed. It particularly resonated with me because it was an issue I struggled with as a child too. From there, we were able to speak to his teacher, who gently helped him integrate into games with other children. Now that he has friends, he actually looks forward to seeing them. That breakthrough happened not through interrogation but through observability and patience. Using my home office as our safe space has now become a regular occurrence. Strangely, this technique of establishing a room as our "safe space" doesn't work for my wife. ## 🔁 Blameless Postmortems (Even When Homework is Due Tomorrow) DevOps culture teaches us to run blameless retrospectives after incidents. Not because we don't care about what went wrong but because assigning blame prevents learning. My 8-year-old has a bad habit of revealing school projects the night before they're due. No matter how often we ask him, "Any homework?" he'll respond with an Oscar-worthy performance of "Nope." Then, at 8:00 PM on a Thursday: "Oh yeah, I need to make a cardboard Roman sword and write about it." The old me would've panicked or scolded. But now, I try to treat it like a retro: What were the signals we missed? How can we improve visibility? Do we need a new "homework alerting system" (also known as a whiteboard on the fridge)? We still get frustrated. But now it's frustration aimed at the system, not the child. 
## 🧭 Observability: Beyond the Logs (and into the Babychinos) With my youngest, she's four—things are different. She's in the plushie-and-babychino phase of life, so we go on "coffee dates" together. I get a double espresso latte; she gets a babychino and a cinnamon swirl, and we just… sit. She talks about Barbie, how she wants to be a ballerina, and how she wants a real pet sheep she'd call 'Baa-llerina' because it's a sheep, and sheep say "baa," and she likes ballet. She doesn't say, "Dad, I'm feeling emotionally disconnected and would benefit from some focused one-on-one time." But I've learned to watch the metrics: her mood shifts, clinginess, eye contact, sleep patterns. You get better at reading logs when you stop waiting for alerts. Parenting isn't just about reacting to tantrums—it's about noticing subtle changes and responding early. Observability at home? It's empathy, finely tuned with instrumentation. ## 🤖 Automation: The Bedtime Pipeline (and Beyond) In DevOps, we obsess over automation. Why? Because it reduces friction, ensures consistency, and frees up our engineers for more complex, creative work. Turns out, the same principle applies when you're trying to get three small humans from hyperactive to horizontal. Our bedtime routine, for example, is a finely tuned, automated pipeline: Dinner, PJs, brushing teeth, using the toilet, stories, cuddles, and lights out. When it works, it's beautiful. Each step flows into the next, reducing decision fatigue for both us and the kids. They know what's coming, which minimises resistance. We know what's coming, which minimises parental meltdowns. It's not just bedtime; it works for the morning routine before school or even just having designated spots for shoes and backpacks – these are all tiny automations. They're like mini-scripts running in the background of our family life, reducing cognitive load and preventing us from constantly having to "manually deploy" every single task. 
When the system is automated, we have more time and energy for unexpected 'incidents' – like explaining for the fifth time why we can't have a pet unicorn. ## 🔄 Continuous Integration and Daily Stand-Ups In engineering, Continuous Integration refers to the practice of frequently merging code into a shared project repository. This approach includes automated builds and tests that detect issues early on, assisting in the identification of conflicts before they develop into major problems. My wife and I may not be merging lines of code, but we are continually integrating our parenting approaches. We represent two distinct 'branches' of the same 'project,' and if we don’t regularly synchronize, we risk encountering merge conflicts that affect the entire 'system' (i.e., the kids). Our daily stand-up usually happens over breakfast or after the kids are asleep. We ask questions like, "How was school pickup?" "Did you talk to him about the math homework?" and "She seems a bit quiet or clingy today; is something wrong?" These are not formal meetings but quick and important check-ins. We share what we notice, align our responses to new behaviors, and bring up any potential issues before they escalate. This keeps our family approach—our shared way of parenting—consistent and harmonious. When we are not on the same page, things become chaotic. One parent says yes, the other says no, and suddenly, our perfectly crafted 'deployment' (e.g., getting everyone out the door on time) grinds to a halt. CI, even in parenting, makes for a smoother operation. ## 🧩 The Monolith vs Microservices Debate (aka Marriage) My wife and I parent in very different ways. She's not an engineer. She doesn't think about "event-driven architecture" or "incident response timelines." Her approach is more intuitive, relational, and deeply human. At first, this led to some friction. Why didn't she want to optimise bedtime flow with a Kanban board? 
Why didn't I just *feel* that someone was about to have a meltdown? But over time, I've realised that our differences are a feature, not a bug. We balance each other out. Like a good system composed of microservices and a stable monolith—you need both agility and cohesion. Flexibility and structure. Love and logic. We're both debugging this system in real-time, just using different tools. ## 🕹 When Roblox Becomes Pair Programming I don't particularly enjoy Roblox. The games are confusing, and they give me motion sickness like I just went on a roller coaster. But my 6-year-old loves it. He *lights up* when we play together. The other day, he tried to explain a game to me. I nodded along, trying not to feel sick while hiding from "Scary Larry." He laughed at how lost I was. I was confused but still there. This is what matters. The primary objective of pair programming is to write better code and share knowledge. However, its real strength is in the teamwork and connection built during the process. Similar to Roblox, the most valuable result isn’t always what shows up on the screen. ## 🙃 Closing Thoughts DevOps didn't make me a perfect parent, but it gave me a mindset: one that values systems thinking, curiosity, and resilience. And fatherhood made me a better engineer, too. It taught me that no system—technical or human—responds well to blame. That emotional outages need graceful recovery. So, this Father's Day, I'm not celebrating my success. I'm celebrating the debugging process. The retros. The messy commits. The half-working prototypes. And the three little humans who remind me daily that parenting is the most complex system I'll ever help build. 
--------------------------------------------------------------------------------
title: "Reviving My Broken Ender 3"
date: "2025-05-25"
url: https://www.hamzahkhan.com/reviving-my-broken-ender-3/
--------------------------------------------------------------------------------

### Introduction

In my [previous post](/starting-my-3d-printing-journey-in-2025-with-the-ender-3), I introduced the two second-hand Ender 3 printers I recently acquired. One was completely stock and fully operational, while the other had a variety of upgrades — but was non-functional. I decided to take on the challenge of restoring and improving the broken Ender 3, ordering a range of replacement parts that would not only get it printing again but also enhance its performance.

Here’s the configuration the broken printer came with:

- **BigTreeTech SKR Mini E3 v2** ([Amazon](https://amzn.to/3Z5erXO) / [AliExpress](https://s.click.aliexpress.com/e/_okZNFcr)) – A 32-bit control board that works great with Klipper.
- **[Microswiss Direct Drive Extruder and All-Metal Hotend](https://amzn.to/44QRATU)** – Great for flexible filaments and better thermal control.
- **[Creality Dual Z-Axis Upgrade Kit](https://amzn.to/45q6kJt)** – Helps maintain gantry level and improves layer consistency.
- **[Upgraded Bed Springs](https://amzn.to/3GLKW7h)** – Simple but essential for better bed stability and reduced releveling.
- **BLTouch Bed Probe** – An automatic bed leveling probe. I'm not sure if it's a genuine BLTouch or a clone.

To bring it back to life, I ordered the following components:

- **[Triangle-Lab 50W Heater Cartridge](https://s.click.aliexpress.com/e/_oDxItx9)** - 25% more power than stock (40W → 50W) for quicker heat-up times.
- **[ATC Semitec 104GT-2 Thermistor](https://s.click.aliexpress.com/e/_oEmkP7V)** - High accuracy, faster thermal response, and rated up to 300°C.
- **[Mellow 3D Heater Block + Silicone Sock](https://s.click.aliexpress.com/e/_ond5jyn)** - Not a massive upgrade, just a necessary part of the hotend assembly!
- **[Creality Hardened Steel Nozzle (0.4mm)](https://s.click.aliexpress.com/e/_opYcJYP)** - Ideal for abrasive filaments and long-term durability.
- **[BIQU Microprobe v2](https://s.click.aliexpress.com/e/_oBfTxPl)** - Whilst the BLTouch is fine, I will discuss my reasons for switching later in this post.

While I’m still waiting on the hardened steel nozzles to arrive, I decided to move forward using some [standard brass 0.4mm nozzles](https://amzn.to/4mtylWA) for now. Since I’ll be printing primarily in PLA, they’ll do just fine.

![My fixed Ender 3 printing a benchy with Red PLA!](/images/2025/reviving-my-broken-ender-3/Ender_3_Benchy.png)

### The Plan: Breathing Life Into a Dead Printer

As mentioned in my [previous post](/starting-my-3d-printing-journey-in-2025-with-the-ender-3), this particular Ender 3 originally belonged to my brother. He picked it up second-hand shortly after I got my working (but stock) Ender 3. However, he struggled to get reliable prints out of it and eventually gave up—replacing it with a Bambu Lab A1 for that “it just works” 3D printing experience. I gladly took the broken Ender 3 off his hands with a goal: to fix it and make it better than ever.

![A failed print my brother sent me](/images/2025/reviving-my-broken-ender-3/Ender-3-Failed-Print.png)

When the printer arrived, I ran into similar issues: inconsistent extrusion and frequent nozzle clogs. After removing the stock Ender 3 cooling shroud, the problem became clear. The hotend was covered in burnt-on filament, which was leaking from the heat break, indicating that the hotend had been assembled incorrectly. The thermistor wire had nearly been severed by an overtightened screw, and the heater cartridge wiring was in rough shape.

> 💡 I strongly suspect the compromised thermistor was a major contributor to the inconsistent extrusion.
My initial plan was simple: swap out the heat block, heater cartridge, and thermistor. But when I tried to remove the heat break, I accidentally damaged the threads on the Microswiss titanium heat break. It was cheaper to buy a Microswiss clone than a genuine Microswiss replacement heat break, so I ordered the [Triangle-Lab All-Metal Hotend](https://s.click.aliexpress.com/e/_oD2pYRh).

At the time, I thought this Triangle-Lab kit didn’t include a heater block, so I also ordered a [Mellow 3D heater block with a silicone sock](https://s.click.aliexpress.com/e/_ond5jyn). To my surprise, the Triangle-Lab kit actually came with a heater block and silicone sock included, so I won't be using the Mellow 3D parts for now. They can just go into my spare parts box.

This next part might be a little controversial... Initially, I considered using the Triangle-Lab heat break (and heat block) with the original Microswiss heatsink. But ultimately, I opted to swap out the entire assembly and go all-in with the Triangle-Lab hotend. My reasoning? Clean integration. Mixing parts from different manufacturers just felt messy — so I decided to standardise on a single brand for the entire hotend setup.

I don’t believe there’s a meaningful performance difference between the Microswiss and Triangle-Lab heatsinks, though I'm not sure of the best way to confirm this. The Microswiss heatsink is slightly taller, but according to my basic kitchen scale they weigh about the same (13 grams).

![Ender 3 Hotend leaking filament from the heatbreak](/images/2025/reviving-my-broken-ender-3/Ender-3-Leaking-Hotend.png)

### Installing the All-Metal Hotend

Installing the new all-metal hotend was a fairly straightforward process. I began by carefully removing the old hotend assembly, including the damaged heater cartridge and thermistor.
Once those components were out, I routed the new wires through the Ender 3's toolhead loom and connected them to the BTT SKR Mini E3 v2 control board.

If you’re unfamiliar with this process, this video offers a great visual guide for removing and installing hotend components on a slightly newer Ender 3 model:

{{< youtube UpyiLT-29js >}}

Because the [Triangle-Lab All-Metal Hotend](https://s.click.aliexpress.com/e/_oD2pYRh) is a clone of the Microswiss design, I used the official Microswiss video guide for assembling the new hotend and mounting it to the carriage:

{{< youtube ub9XLMcDdy4 >}}

> 🛠️ **Hot Tip**: When assembling the hotend, screw the heat break fully into the heater block so it sits flush—don’t over-tighten yet. Next, screw in the nozzle until it contacts the heat break inside. You’ll leave a small visible gap between the nozzle and the block at this stage. Once the hotend is mounted and you’ve powered up the printer, you’ll heat it up to operating temperature and tighten the nozzle fully with a spanner. This ensures there’s no gap between the nozzle and the heat break, which is crucial for avoiding leaks and maintaining consistent extrusion. This video provides an excellent explanation of the entire process:

{{< youtube ATs0Ob0qB7k >}}

![Microswiss Direct Drive extruder fitted with the Triangle-Lab all-metal hotend](/images/2025/reviving-my-broken-ender-3/Microswiss-DD-with-TriangleLabs-Hotend.png)

### Installing the BIQU Microprobe v2

Although this Ender 3 came with a working BLTouch probe, I decided to swap it out for a **[BIQU Microprobe v2](https://s.click.aliexpress.com/e/_omprobe)** — the same one I installed on my other Ender 3 build. I’ve had a great experience with it so far, and I liked the idea of having matching probe setups on both printers for easier configuration and consistency in my Klipper configuration.
#### Why I Switched From BLTouch While the BLTouch is a solid and popular option, the Microprobe v2 offers a few key advantages: - **Improved accuracy**: The Microprobe delivers more consistent and precise probing results. While I’m not printing ultra-high-precision parts (yet), having repeatable mesh leveling is still beneficial. - **Lower weight**: This was the real motivator. The Microprobe is significantly lighter than the BLTouch, which is important for printhead dynamics — especially since I plan to upgrade to a **HeroMe Gen7** cooling system in the future. That system adds weight, so every gram saved elsewhere helps maintain print speed and quality. #### Mounting Considerations Currently, I’m using the Satsana fan shroud, which works well, but isn’t designed specifically for the much smaller Microprobe. To compensate, I added a simple spacer to ensure the probe tip triggers before the nozzle touches the bed. Once I move to the HeroMe Gen7, this will be much cleaner—there are already community-made HeroMe adapters specifically designed for the Microprobe v2. ![Ender 3 Satsana fan shroud with BIQU Microprobe v2 Installed](/images/2025/reviving-my-broken-ender-3/Microprobe-v2-Installed.png) ### Printing Enclosures for the BTT SKR Mini and Raspberry Pi When I received this Ender 3, it didn’t come with an enclosure for the **BTT SKR Mini E3 v2** board—leaving all the wiring and components completely exposed. Aside from the visual clutter, exposed electronics are a long-term reliability concern, so I decided to design a clean enclosure setup. There are plenty of great printable enclosure options out there. One of my early favorites was [this all-in-one rear-mounted enclosure](https://www.thingiverse.com/thing:3688967), which houses both the control board and the Raspberry Pi. 
However, I was planning to move the **power supply to the rear of the printer** using [this mod](https://www.printables.com/model/618649-ender-3-power-supply-mount), and I wanted a front-mounted enclosure to pair with that layout.

Eventually, I found a fantastic solution: [SKR Mini E3 + Raspberry Pi Front Enclosure (Printables)](https://www.printables.com/model/464881-skr-mini-e3-v3-raspberry-pi-front-housing-for-ende)

This enclosure checked all the boxes for me:

- ✅ Compatible with the **BTT SKR Mini E3**.
- ✅ Includes a separate housing for the **Raspberry Pi** running Klipper.
- ✅ Provides a clean cable pass-through between enclosures — perfect for switching from USB to UART communication, which I also did while tidying everything up.
- ✅ Thoughtful ventilation: Both enclosures have front-facing vents and space for a bottom-mounted exhaust fan.

> 💡 Most stock Ender 3 cases have a top-mounted fan, which can suck in dust and bits of filament. This design moves the fan to the bottom, where it’s less exposed—solving a long-standing annoyance elegantly.

I’ll likely be printing this same setup for my other Ender 3 soon because I like it that much. I ran out of black filament whilst printing the lid for the right-hand enclosure for the Raspberry Pi, but I will print a replacement when I have more black filament. I'm also still awaiting a buck converter so I can get rid of the Micro-USB cable going into the Raspberry Pi enclosure.

![Ender 3 with new electronics enclosure installed along with enclosure for the Raspberry Pi running Klipper](/images/2025/reviving-my-broken-ender-3/Ender-3-new-enclosures.png)

I printed the enclosures with [Elegoo Black PLA+](https://elegoo.sjv.io/qzkLBL) with 20% infill and no supports.

### Updating My Klipper Configuration

With the hardware upgrades completed, it was time to update the **Klipper firmware configuration** to match the new thermistor, hotend, and probe setup.
Here’s how I brought everything together:

#### Updating Thermistor Configuration

Since I replaced the stock thermistor with the [ATC Semitec 104GT-2](https://s.click.aliexpress.com/e/_oEmkP7V), I had to update the `[extruder]` section in my `printer.cfg` to ensure accurate temperature readings:

```ini
[extruder]
...
sensor_type: ATC Semitec 104GT-2
max_temp: 300
...
```

Setting the correct `sensor_type` ensures temperature accuracy, critical for both print quality and safety. I also increased the `max_temp` to 300°C, which aligns with the specs of both the thermistor and all-metal hotend.

#### PID Tuning

**PID tuning** (Proportional-Integral-Derivative tuning) helps your printer maintain a stable and consistent nozzle temperature, especially after changing components like the heater cartridge or hotend. Since I mostly print PLA at 200°C, I ran the following command in the Klipper console:

```
PID_CALIBRATE HEATER=extruder TARGET=200
```

After the calibration finished, I saved the new values to the configuration:

```
SAVE_CONFIG
```

This step significantly reduces temperature fluctuations, which helps avoid under-extrusion or stringing.
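For reference, `SAVE_CONFIG` appends the tuned values to an auto-generated block at the bottom of `printer.cfg`. The numbers below are purely illustrative — yours will come from your own calibration run:

```ini
#*# <---------------------- SAVE_CONFIG ---------------------->
#*# DO NOT EDIT THIS BLOCK OR BELOW. The contents are auto-generated.
#*#
#*# [extruder]
#*# control = pid
#*# pid_kp = 21.527
#*# pid_ki = 1.063
#*# pid_kd = 108.982
```

If you later hand-edit the `[extruder]` section, leave this block alone — Klipper manages it itself and will complain if it's modified.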
#### Configuring the BIQU Microprobe v2

I replaced the existing BLTouch with the BIQU Microprobe v2, which required removing the `[bltouch]` section and adding the following probe configuration:

```ini
[gcode_macro PROBE_DOWN]
gcode:
    SET_PIN PIN=probe_enable VALUE=1

[gcode_macro PROBE_UP]
gcode:
    SET_PIN PIN=probe_enable VALUE=0

[output_pin probe_enable]
pin: PA1
value: 0

[probe]
pin: ^!PC14
deactivate_on_each_sample: False
x_offset: -49
y_offset: -9
samples: 3
samples_tolerance: 0.05
samples_tolerance_retries: 3
activate_gcode:
    PROBE_DOWN
    G4 P500
deactivate_gcode:
    PROBE_UP
```

> 🔧 *Tip: You’ll also need to configure other sections like `[safe_z_home]`, `[bed_screws]`, and your virtual Z endstop to fully integrate the Microprobe.*

#### Re-run Input Shaper Calibration

**Input shaping** helps reduce vibrations during fast movements by adjusting the motion planning to cancel out resonance. This results in cleaner prints with fewer artifacts like ghosting or ringing—especially at higher speeds. Since I made significant hardware changes to the hotend—including weight distribution and mounting—I re-ran the input shaper calibration to ensure optimal performance. I'm using a [Mellow ADXL345](https://s.click.aliexpress.com/e/_oEQFOkb) accelerometer for this setup.

This video does a great job of explaining what input shaping is and how to set it up in Klipper:

{{< youtube Fe_BFGg_ojg >}}

### The Test: Benchy Time

With everything installed and configured, it was finally time to test the newly refurbished Ender 3 by printing a classic Benchy. Since the printer was non-functional when I first got it, I unfortunately don’t have any “before” prints to compare the results with—but I can confidently say that I am very happy with the results. That’s a huge win after all the hardware issues this machine had.
The Benchy print validated:

- The new hotend and thermistor setup is working perfectly
- The Microprobe is giving accurate bed leveling
- The PID tune is keeping temps rock solid

![Benchy!](/images/2025/reviving-my-broken-ender-3/Benchy-1.png) ![Benchy!](/images/2025/reviving-my-broken-ender-3/Benchy-2.png)

### Final Thoughts: A Printer Reborn

After a decent amount of hardware troubleshooting, replacement, and configuration, I’m happy to say this Ender 3 is now officially back from the dead. I'm genuinely pleased with how this build turned out: the print quality is great, the printer has been reliable, and perhaps just as importantly, the wiring is finally cleaned up. That mess of loose cables from the original setup, which lacked any enclosure for the control board, was bothering me more than I expected, so getting everything enclosed and routed properly is a huge quality-of-life win.

As for the new hotend setup: the Trianglelab all-metal hotend has been working flawlessly so far. That said, I don’t have a whole lot to compare it to other than my other Ender 3, which is still running a stock Bowden setup—and to be fair, the stock hotend has always worked fine for me. The main issue here wasn’t the design of the Microswiss or stock hotend; it was simply the state this printer arrived in — not working at all.

One thing I *am* especially enjoying is the Microswiss Direct Drive extruder that this printer came with. It makes filament swaps way easier. I’m considering switching my other Ender 3 to a direct drive system too, although I'm still researching the options.

Since I bought a 2-pack of heater cartridges and thermistors, I’ll be upgrading the thermistor and heater on my second Ender 3 soon. I’m also planning to install a Trianglelab hotend on that machine to standardize the setup between the two printers and make maintenance and tuning easier across the board.

### What’s Next?
I’ve already got a short wishlist of upgrades I want to tackle next on this machine:

- **Magnetic Build Plate:** I'm using the stock glass build plate that comes with the Ender 3 at the moment, and it can sometimes be difficult to remove prints from. A magnetic build plate makes this a lot easier: you can flex the removable part of the plate and the print just pops off.
- **Linear Rails:** I want to switch the X and Y axes over to linear rails. These offer improved rigidity and precision, especially for faster print speeds. They eliminate the slop and flex you might get from standard V-wheels, and should lead to more consistent layer alignment and smoother surface finishes.
- **Oldham Couplers + POM Nut for Z-Axis:** Oldham couplers help eliminate **Z-wobble** by decoupling lateral movement from vertical movement, which can occur when the lead screw isn’t perfectly straight. Pairing that with a **POM anti-backlash nut** will help reduce any vertical play in the Z-axis, giving cleaner walls on tall prints.

After that, I'll probably stop with further modifications for a while... or at least, that's the plan for now.

### A Different Kind of Fun

This whole project has been a great reminder that 3D printing isn’t just about the objects we create—it’s also about the machines themselves. While designing and printing things is fun, there’s a totally different kind of satisfaction in fixing and upgrading a piece of hardware with your own hands.

There’s something uniquely rewarding about working on a **physical machine**, seeing real improvements, and learning how all the components interact. In my day job as a DevOps engineer, most things have moved to "the cloud", and I rarely touch real physical hardware. It's something that I do miss, so this has been a lot of fun.

Honestly, it’s got me itching for the next project... I’ve been thinking of diving deeper into the rabbit hole and building a **Voron 0.2**. Coming soon?
😁

### Affiliate Links

- [BIGTREETECH SKR Mini E3 V3.0 (Amazon)](https://amzn.to/3Z5erXO)
- [BIGTREETECH SKR Mini E3 V3.0 (AliExpress)](https://s.click.aliexpress.com/e/_okZNFcr)
- [BIQU Microprobe v2 (AliExpress)](https://s.click.aliexpress.com/e/_oBfTxPl)
- [Triangle-Lab 50W Heater Cartridge (AliExpress)](https://s.click.aliexpress.com/e/_oDxItx9)
- [ATC Semitec 104GT-2 Thermistor (AliExpress)](https://s.click.aliexpress.com/e/_oEmkP7V)
- [Mellow 3D Heat Block + Silicone Sock (AliExpress)](https://s.click.aliexpress.com/e/_ond5jyn)
- [Creality Hardened Steel Nozzle (AliExpress)](https://s.click.aliexpress.com/e/_opYcJYP)
- [Triangle-Lab All-Metal Hotend (AliExpress)](https://s.click.aliexpress.com/e/_oD2pYRh)
- [Mellow 3D ADXL345 Accelerometer (AliExpress)](https://s.click.aliexpress.com/e/_oEQFOkb)
- Elegoo PLA+ ([UK Store](https://elegoo.sjv.io/c/6223541/2118843/19663) / [US Store](https://elegoo.sjv.io/c/6223541/2001678/19663) / [CA Store](https://elegoo.sjv.io/c/6223541/2001680/19663) / [EU Store](https://elegoo.sjv.io/c/6223541/1930764/19663))

--------------------------------------------------------------------------------
title: "Starting My 3D Printing Journey in 2025 with the Ender 3: Klipper, Tinkering, and Endless Upgrades"
date: "2025-05-07"
url: https://www.hamzahkhan.com/starting-my-3d-printing-journey-in-2025-with-the-ender-3/
--------------------------------------------------------------------------------

### Introduction

I've been fascinated by 3D printing for years, but I always hesitated—would I actually use it enough to justify the cost? It's easy to get swept up in the hype of new hobbies, only to let gadgets gather dust. Recently, while browsing Facebook Marketplace, I stumbled across a deal that seemed too good to pass up — an original stock Ender 3 with a few spools of PLA and TPU for just £50. I decided to jump in, figuring that at this price, it would be a low-risk way to test the waters. And so, my 3D printing journey began.
In this post, I'll share my early experiences with the Ender 3, from unboxing to first prints (and a few mishaps along the way). Whether you're a fellow beginner or just curious about 3D printing, I hope my story helps you take the next step—or at least enjoy the ride!

![My new (to me) Ender 3, ready for its first print—with my daughter's curious fingers sneaking into the frame](/images/2025/starting-my-3d-printing-journey-in-2025-with-the-ender-3/stock-ender-3.jpg)

### Flashing Klipper and Upgrading the Stock Board

Like many hobbyists, I'd heard whispers of [Klipper](https://www.klipper3d.org/) — a firmware that promises smoother prints and faster speeds by offloading processing to an external, more powerful computer, e.g. a Raspberry Pi. As someone who'd happily install custom firmware on a toaster (if given the chance), the idea was irresistible. So, naturally, I dove straight into flashing it… *before even verifying if the printer worked*.

**Spoiler:** This was not my wisest move. A test print first would've been smart. But where's the fun in caution?

#### The Klipper Installation Process

1. Flashing the Stock Board:
   - I used an Arduino Uno as a programmer for the Ender 3's 8-bit board.
   - The process was straightforward (plenty of guides exist, so I'll spare the details).
2. Setting Up Mainsail:
   - Loaded [MainsailOS](https://docs-os.mainsail.xyz/) onto a spare Raspberry Pi 3 B+.
   - Started with Klipper's default [Ender 3 config](https://github.com/Klipper3d/klipper/blob/master/config/printer-creality-ender3-2018.cfg), which worked well out of the box.

#### First Print: Cali Cat (Because Why Not?)

For my first print on the newly Klipperized Ender 3, I chose a [Cali Cat](https://www.thingiverse.com/thing:1545913). To be honest, it wasn't for calibration—I didn't really know what I was looking for. It was purely because my four-year-old daughter was very interested in the printer, and I figured she could put the Cali Cat into her doll house!
![My first Klipper-powered print: Cali Cat](/images/2025/starting-my-3d-printing-journey-in-2025-with-the-ender-3/calibration-cat.jpg)

I was lucky: the hardware worked perfectly. I was quite nervous, as I hadn't done a test print before flashing Klipper, but I was very happy with the Cali Cat's print quality.

### Upgrading the Control Board and Adding a Bed Leveling Probe

The stock Ender 3 control board worked fine, but the noisy stepper drivers were driving me *slightly* insane — especially when printing overnight. (Who enjoys a symphony of whirring motors outside their bedroom at 2 AM?)

#### Why Upgrade the Control Board?

- **Noise Reduction**: The original 8-bit board's stepper drivers were loud enough to be heard rooms away.
- **Expandability**: A modern board opens doors for upgrades — like adding an auto bed leveling probe.

#### Choosing the BIGTREETECH SKR Mini E3 V3.0

After researching, I settled on the BIGTREETECH SKR Mini E3 V3.0 ([Amazon](https://amzn.to/3Z5erXO)/[AliExpress](https://s.click.aliexpress.com/e/_okZNFcr)) because:

- **Drop-in replacement** — uses the original screw holes and connectors.
- **Silent TMC2209 drivers** — goodbye, screechy motors!
- **Klipper-friendly** — flashing was easier than flashing the stock board.

The result? Near-silent operation. Now, the only noise is the fans - a *massive* quality-of-life improvement.

#### Upgrading the Bed Springs (Because Manual Leveling is Pain)

The stock bed springs are known to be unreliable, forcing frequent re-leveling with the infamously finicky paper method. I swapped them for [stiffer yellow springs and aluminum wheels](https://amzn.to/3GLKW7h), which helped—but I still wanted a true fix: auto bed leveling.
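As a side note for anyone doing the same board swap with Klipper: the silent drivers are configured with one `[tmc2209 ...]` section per axis. This is a minimal sketch for the X axis only — the pins, address, and current below are placeholders, so copy the real values from Klipper's bundled sample config for this board (`generic-bigtreetech-skr-mini-e3-v3.0.cfg`):

```ini
# One [tmc2209 ...] section per stepper; X shown here.
[tmc2209 stepper_x]
uart_pin: PC11       # placeholder - take from the board's sample config
tx_pin: PC10         # placeholder
uart_address: 0      # placeholder
run_current: 0.580   # placeholder - match your stepper motor's rating
# Keep StealthChop active at all speeds for near-silent motion
stealthchop_threshold: 999999
```

The `stealthchop_threshold` line is what buys the silence: below that speed the driver runs in its quiet StealthChop mode, so setting it very high keeps it quiet everywhere.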
#### Adding the BIQU Microprobe v2

I opted for the [BIQU Microprobe v2](https://s.click.aliexpress.com/e/_oBfTxPl) over a BLTouch for two reasons:

- **Higher accuracy** (theoretically - I'm not too concerned about this, as I don't think anything I print will need that level of accuracy).
- **Lighter weight** — As I discuss later, I plan on looking into part cooling upgrades, which add weight to the tool head, so keeping the probe as light as possible is ideal.
- Works great with the SKR Mini E3 v3!

The installation and configuration process was supposed to be pretty straightforward. This video describes the process well: [BIQU Microprobe and SKR Mini E3 V3 upgrade on the Ender 3!](https://www.youtube.com/watch?v=RyT8wZLXwBQ)

Unfortunately for me, on the first test print, the probe slammed into the print bed, bending the probe pin and scratching the surface of the bed as well. I hadn't noticed that the retractable pin was half-unscrewed during shipping, so it didn't retract fully after it had finished the adaptive bed mesh probing. I managed to straighten the pin enough to get it working, but it still occasionally sticks, and I haven't been able to straighten it completely. A replacement is likely in my future. Always inspect new hardware *before* trusting it with your print bed's safety!

### Dialing in Input Shaping with the Mellow ADXL345

One of Klipper's most game-changing features is input shaping - a software technique that reduces ringing (those ghostly echoes on sharp corners) without sacrificing print speed. After reading how Voron and Bambu printers use accelerometers for precision tuning, I decided to add a [Mellow ADXL345](https://s.click.aliexpress.com/e/_oEQFOkb) to my setup.

#### Why an Accelerometer?
- **Science over guesswork**: Instead of manually tweaking values, the ADXL345 measures your printer's actual resonance frequencies.
- **Printer-specific tuning**: Every machine vibrates differently based on frame rigidity, mass distribution, etc.
- **Future-proofing**: Essential for pushing speeds beyond 100mm/s reliably.

The Mellow 3D GitHub page provides the Klipper configuration required: [https://mellow-3d.github.io/fly_adxl345_usb_klipper_config.html](https://mellow-3d.github.io/fly_adxl345_usb_klipper_config.html). I would also recommend this video, which explains how input shaping works and how to configure it in Klipper: [You're NOT getting the MOST out of Input Shaper](https://www.youtube.com/watch?v=Fe_BFGg_ojg)

### Part Cooling Upgrade: From Stock to Satsana

While browsing Ender 3 upgrade videos, I kept seeing these sleek, heavily modded toolheads. Beyond looking cool, most of them were designed to improve part cooling. The [HeroMe Gen7](https://www.printables.com/model/39322-hero-me-gen7-platform-release4) system caught my eye, but it demanded:

- Multiple complex printed parts (some needing heat-resistant filaments)
- New fans
- Heat-set threaded inserts

As a beginner, that felt like jumping into the deep end a little too quickly. So I opted for a simpler but effective solution: the [Satsana shroud](https://www.thingiverse.com/thing:4369859).

#### Why Satsana?

- It uses the stock fans, so no extra costs
- Very easy to print
- There is a [remix that works with my BIQU Microprobe v2](https://www.thingiverse.com/thing:6538262)

![Toolhead with the Satsana shroud and Microprobe v2](/images/2025/starting-my-3d-printing-journey-in-2025-with-the-ender-3/satsana_with_microprobe.jpg)

Unfortunately, I forgot to print a Benchy for a before/after comparison of the actual print quality difference, but there are a lot of examples on the Satsana Thingiverse page that show the improvements people have seen.
My prints have definitely improved, even if I can't show a side-by-side comparison.

### My Second Ender 3: The Plot Twist

A funny thing happened just after I got my Ender 3. My younger brother, who had been eyeing 3D printers too, also found an original Ender 3 on Gumtree; he managed to get it for £50 as well. His printer didn't come with any filament, but it did come with a bunch of upgrades, including:

- BTT SKR Mini E3 v2
- Dual Z-axis motors
- BLTouch probe
- MicroSwiss direct drive extruder and all-metal hotend
- Upgraded bed springs

But there was a catch: despite the premium upgrades, he battled constant adhesion problems, among many other issues. Frustrated, he upgraded to a Bambu Lab A1 for "plug-and-play" simplicity—and handed me his troubled Ender 3.

#### Diagnosing the Nightmare Printer

At first, I replicated his struggles — nothing would stick to the glass bed. I suspect the coating on the bed is damaged somehow, and even after swapping out the bed, I ran into a lot of issues. A full toolhead teardown revealed why:

- Near-severed thermistor wire from an overtightened screw
- Burnt heating cartridge cables
- Heat block caked in burnt filament

The hotend was a disaster. Whether the fault was temperature misreporting or uneven heating, the burnt and nearly severed wires made it clearly unsafe. My attempts to salvage the heat block only made things worse, so it was time for replacements.

#### The Hotend Resurrection Kit

I ordered upgrades to fix *and* future-proof both printers:

- [Triangle-Lab 50W Heater Cartridge](https://s.click.aliexpress.com/e/_oDxItx9) - 25% more power than stock (40W → 50W) for faster heat-up.
- [ATC Semitec 104GT-2 Thermistor](https://s.click.aliexpress.com/e/_oEmkP7V) - Higher accuracy, faster response, and 300°C tolerance (vs. stock 260°C).
- [Mellow Heat Block + Silicone Sock](https://s.click.aliexpress.com/e/_ond5jyn) - Fresh start with thermal consistency.
- [Creality Hardened Steel Nozzle](https://s.click.aliexpress.com/e/_opYcJYP) - Supports more filament types, especially the more abrasive ones.

I decided to buy two of each of these, so I can upgrade my original printer as well. As a bonus, I grabbed a [Triangle-Lab All-Metal Hotend](https://s.click.aliexpress.com/e/_oD2pYRh) for my original Ender 3 — curious how it'll compare to the MicroSwiss on the second Ender 3. I'm still awaiting delivery of these parts, and will write a follow-up post once they have arrived and I have installed them.

### Why I Enjoy Tinkering with the Ender 3

While it's very easy to get frustrated with the Ender 3, especially when newer printers "just work," I've really enjoyed the process of learning and tinkering. There's something about working on a machine and improving it piece by piece, kind of like working with LEGO or Meccano as a kid. The Ender 3 is an incredibly versatile platform with endless upgrade options, and Klipper has given me a ton of customization and control.

Browsing the Ender 3 subreddit, I saw someone say something along the lines of:

> "If you're not printing upgrades, what *are* you printing?"

For me, that's been completely true. My printer has spent a lot of time printing parts to upgrade itself!

### Future Upgrade Plans

Looking ahead, I'm planning several upgrades for both printers:

- **Linear rails** (X/Y axes) – for buttery-smooth motion
- **HeroMe Gen7** – I plan on taking another look at the HeroMe Gen7 when I have a bit more experience
- **Magnetic print bed** – to make it easier to remove prints
- **First printer overhaul** to match the second:
  - Direct drive extruder (for flexible filaments)
  - Dual Z-axis motors (no more gantry sag)

I'm also intrigued by the Switchwire conversion, which turns the Ender 3 into a Voron Switchwire. It looks challenging but exciting.

### Final Thoughts

The Ender 3 can be incredibly frustrating — there's no sugar-coating that.
Compared to modern printers like any of the Bambu Lab machines, it lacks polish, ease of use, and reliability out of the box. Those newer machines are designed to "just work," and they often do. But the Ender 3 shines in a different way.

If you're someone who enjoys tinkering, troubleshooting, and learning how things work — the Ender 3 is basically adult LEGO. Or Meccano, if you're into nuts and bolts. It's a platform that encourages experimentation and rewards patience. The community around it is massive, the upgrade path is nearly endless, and the sense of accomplishment when you finally get things dialed in is hard to beat.

So if you're thinking of picking one up in 2025 — especially second-hand — and you've got a DIY streak in you… I can't recommend it enough.

If you are looking for filament, I have been using Elegoo PLA+, which I can highly recommend ([UK Store](https://elegoo.sjv.io/c/6223541/2118843/19663) / [US Store](https://elegoo.sjv.io/c/6223541/2001678/19663) / [CA Store](https://elegoo.sjv.io/c/6223541/2001680/19663) / [EU Store](https://elegoo.sjv.io/c/6223541/1930764/19663))

### Affiliate Links

- [BIGTREETECH SKR Mini E3 V3.0 (Amazon)](https://amzn.to/3Z5erXO)
- [BIGTREETECH SKR Mini E3 V3.0 (AliExpress)](https://s.click.aliexpress.com/e/_okZNFcr)
- [BIQU Microprobe v2 (AliExpress)](https://s.click.aliexpress.com/e/_oBfTxPl)
- [Triangle-Lab 50W Heater Cartridge (AliExpress)](https://s.click.aliexpress.com/e/_oDxItx9)
- [ATC Semitec 104GT-2 Thermistor (AliExpress)](https://s.click.aliexpress.com/e/_oEmkP7V)
- [Mellow 3D Heat Block + Silicone Sock (AliExpress)](https://s.click.aliexpress.com/e/_ond5jyn)
- [Creality Hardened Steel Nozzle (AliExpress)](https://s.click.aliexpress.com/e/_opYcJYP)
- [Triangle-Lab All-Metal Hotend (AliExpress)](https://s.click.aliexpress.com/e/_oD2pYRh)
- [Mellow 3D ADXL345 Accelerometer (AliExpress)](https://s.click.aliexpress.com/e/_oEQFOkb)
--------------------------------------------------------------------------------
title: "How to Redirect Hardcoded DNS with VyOS (Perfect for Pi-hole or Blocky Setups)"
date: "2024-03-28"
url: https://www.hamzahkhan.com/captive-dns-with-vyos/
--------------------------------------------------------------------------------

Smart devices like Chromecasts and TVs often use hardcoded DNS servers that bypass custom DNS filters like Pi-hole or Blocky. In this guide, you'll learn how to configure VyOS NAT rules to **intercept and redirect all DNS requests** to your preferred DNS server — even if the client tries to bypass it.

I use [Blocky](https://0xerr0r.github.io/blocky/) as the DNS server on my home network, but this should work with Pi-hole and any other DNS server as well. To stop devices from bypassing it, I set up a few NAT rules on my [VyOS](https://vyos.io/) router that redirect any DNS queries aimed at unknown DNS servers to my Blocky server.

### Step 1: Define Allowed DNS Servers

Start by creating an address group containing the allowed DNS servers. This ensures that legitimate DNS queries are not redirected.

```bash
mhamzahkhan@homelab-gw:~$ configure
[edit]
set firewall group address-group dns-servers address '10.254.95.3'
set firewall group address-group dns-servers address '10.254.95.4'
```

### Step 2: Redirect Unapproved DNS Requests with NAT

Next, set up a destination NAT rule to redirect DNS queries not intended for the allowed DNS servers to the Blocky DNS server.
```bash
mhamzahkhan@homelab-gw:~$ configure
[edit]
set nat destination rule 5010 description 'Captive DNS'
set nat destination rule 5010 destination group address-group '!dns-servers'
set nat destination rule 5010 destination port '53'
set nat destination rule 5010 inbound-interface name 'bond1.90'
set nat destination rule 5010 protocol 'tcp_udp'
set nat destination rule 5010 translation address '10.254.95.4'
set nat destination rule 5010 translation port '53'
```

In this example, bond1.90 is my internal home network and 10.254.95.4 is my Blocky DNS server.

--------------------------------------------------------------------------------
title: "VyOS as a Reverse Proxy Load Balancer"
date: "2023-09-17"
url: https://www.hamzahkhan.com/vyos-reverse-proxy-load-balancer/
--------------------------------------------------------------------------------

VyOS, the robust open-source network operating system, has recently introduced an exciting new capability – the ability to function as a load-balancing reverse proxy. This integration leverages HAProxy, a battle-tested proxy server and load balancer, giving VyOS powerful reverse proxy and application load balancing functionality. While the integration is still in its early stages and lacks many features, it shows real promise and will hopefully mature with time.

My particular use case for this feature is hosting services at home, despite being behind CGNAT. In my previous articles, I described how to configure a site-to-site VPN between two VyOS routers; this is effectively how I bypass my ISP's CGNAT. The VyOS router that lives in the data centre runs HAProxy and reverse proxies all requests back to my home lab.

In this article, I will detail the steps to configure VyOS as a load-balancing reverse proxy.

## Configuring VyOS

Let's start by creating the services, which tell HAProxy which ports to bind to.
As I would like to terminate SSL on my home Kubernetes cluster, I have configured HAProxy to run in TCP mode, on both port 80 and 443:

```bash
# set load-balancing reverse-proxy service http mode 'tcp'
# set load-balancing reverse-proxy service http port '80'
# set load-balancing reverse-proxy service https mode 'tcp'
# set load-balancing reverse-proxy service https port '443'
```

Next we can define our backends. I define two, one for HTTP and one for HTTPS:

```bash
# set load-balancing reverse-proxy backend ingress-home-http description 'Home K8S HTTP Ingress'
# set load-balancing reverse-proxy backend ingress-home-http mode 'tcp'
# set load-balancing reverse-proxy backend ingress-home-http server ingress-home address '10.254.95.0'
# set load-balancing reverse-proxy backend ingress-home-http server ingress-home check
# set load-balancing reverse-proxy backend ingress-home-http server ingress-home port '80'
# set load-balancing reverse-proxy backend ingress-home-http server ingress-home send-proxy-v2
# set load-balancing reverse-proxy backend ingress-home-http timeout check '10'
# set load-balancing reverse-proxy backend ingress-home-http timeout connect '5'
# set load-balancing reverse-proxy backend ingress-home-http timeout server '180'
# set load-balancing reverse-proxy backend ingress-home-https description 'Home K8S HTTPS Ingress'
# set load-balancing reverse-proxy backend ingress-home-https mode 'tcp'
# set load-balancing reverse-proxy backend ingress-home-https server ingress-home address '10.254.95.0'
# set load-balancing reverse-proxy backend ingress-home-https server ingress-home check
# set load-balancing reverse-proxy backend ingress-home-https server ingress-home port '443'
# set load-balancing reverse-proxy backend ingress-home-https server ingress-home send-proxy-v2
# set load-balancing reverse-proxy backend ingress-home-https timeout check '10'
# set load-balancing reverse-proxy backend ingress-home-https timeout connect '5'
# set load-balancing reverse-proxy backend ingress-home-https timeout server '180'
```

Note the send-proxy-v2 option. This configures HAProxy to send traffic to the backends using the PROXY protocol, which preserves the client IP address. You can read more about it in the [HAProxy blog post titled "Use the Proxy Protocol to Preserve a Client's IP Address"](https://www.haproxy.com/blog/use-the-proxy-protocol-to-preserve-a-clients-ip-address). I am using Traefik as the ingress service on my home Kubernetes cluster, and it supports the PROXY protocol.

Next, we can connect the services to the backends:

```bash
set load-balancing reverse-proxy service http backend 'ingress-home-http'
set load-balancing reverse-proxy service https backend 'ingress-home-https'
```

Don't forget to apply the configuration using the `commit` command!

And that's all there is to it! Your internal services should now be accessible via the public IP address of your VyOS router.

## Conclusion

While VyOS' integration with HAProxy is still in its early stages, it holds great promise. Our exploration began with my goal of overcoming the challenges posed by CGNAT (Carrier-Grade Network Address Translation) to host services from home, and we detailed the steps to configure VyOS as a load-balancing reverse proxy.

This configuration only scratches the surface of what HAProxy offers; more advanced features have already been integrated that I haven't used in this guide, and you can explore them in-depth through the [VyOS documentation](https://docs.vyos.io/en/latest/configuration/loadbalancing/reverse-proxy.html).
This article has been a glimpse into the possibilities of VyOS as a load-balancing reverse proxy, and I hope it sets you on a path of innovation and networking excellence. Whether you're a seasoned networking enthusiast or just getting started, VyOS is an excellent tool to have at your disposal. In the ever-evolving landscape of networking, VyOS continues to be a driving force, and I'm excited to see how this feature evolves in the future.

--------------------------------------------------------------------------------
title: "VyOS - WireGuard based Road Warrior VPN Configuration"
date: "2023-09-16"
url: https://www.hamzahkhan.com/vyos-road-warrior-vpn/
--------------------------------------------------------------------------------

In our modern, hyper-connected world, where remote work and global access are increasingly vital, secure connectivity to your home or office network has evolved from a luxury into an essential requirement. Whether you're a professional in need of remote access to an office network or a home lab enthusiast managing various services, a road-warrior style VPN is your key to secure, hassle-free remote access from anywhere in the world.

Whether you are managing a personal web server, delving into home automation experiments, or overseeing your own cloud services, this guide serves as your roadmap, expanding on the principles covered in our previous post about [establishing a site-to-site VPN with WireGuard and VyOS](/vyos-ospf-wireguard 'Site-to-Site VPN using Wireguard and OSPF on VyOS'). We now shift our focus to the individual user's perspective, bridging the gap between your current location and the heart of your network. Together, we'll navigate the process of configuring VyOS to function as a WireGuard VPN server. Let's dive in and get started!
## Configure the WireGuard Server on VyOS

VyOS' command line interface simplifies the configuration of a WireGuard server and makes client configuration a breeze as well. All of the WireGuard configuration on VyOS is done via the WireGuard interface configuration commands, which are prefixed with `interface wireguard $INTERFACE_NAME`.

### Setup Variables

I refer to these variables throughout this guide:

- `SERVER_PUBLIC_IP` - The server's public IP address
- `SERVER_PRIVATE_KEY` - The server's private key
  - Generated by the `generate pki wireguard key-pair` command
- `SERVER_PUBLIC_KEY` - The server's public key
  - Generated by the `generate pki wireguard key-pair` command
- `CLIENT_PRIVATE_KEY` - The client's private key
  - Generated by the `generate wireguard client-config` command
- `CLIENT_PUBLIC_KEY` - The client's public key
  - Generated by the `generate wireguard client-config` command

### Generate Server Keypair

Generate a keypair for the WireGuard server. Make note of these, as you will need them again.

```bash
mhamzahkhan@gw:~$ generate pki wireguard key-pair
Private key: <- OMITTED - USE YOUR OWN ONE - I will refer to this as ${SERVER_PRIVATE_KEY} ->
Public key: <- OMITTED - USE YOUR OWN ONE - I will refer to this as ${SERVER_PUBLIC_KEY} ->
```

## Configure WireGuard Interfaces

Next we can configure the WireGuard interface. I am using the subnet 10.254.254.0/24 for my VPN, but you can use whatever you like.
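The interface configuration that follows sets `mtu '1420'` and `ip adjust-mss '1380'`. Those numbers are not arbitrary; they fall out of WireGuard's encapsulation overhead. Here's a back-of-the-envelope sketch, assuming a standard 1500-byte WAN MTU and budgeting for a worst-case IPv6 outer header:

```python
# Sketch: deriving the WireGuard tunnel MTU and TCP MSS clamp values.
# Assumes a 1500-byte physical MTU and a worst-case IPv6 outer header.

PHYSICAL_MTU = 1500
OUTER_IP_HEADER = 40   # IPv6 outer header (IPv4 would only need 20)
UDP_HEADER = 8
WG_OVERHEAD = 32       # 16-byte data-message header + 16-byte auth tag

tunnel_mtu = PHYSICAL_MTU - OUTER_IP_HEADER - UDP_HEADER - WG_OVERHEAD
print(tunnel_mtu)      # matches mtu '1420'

# Clamp TCP MSS so an inner IPv4 header (20) + TCP header (20) still fit:
mss = tunnel_mtu - 20 - 20
print(mss)             # matches adjust-mss '1380'
```

If your underlay is IPv4-only you technically have 20 more bytes of headroom, but 1420/1380 is a safe choice that works either way.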
```bash
mhamzahkhan@gw# set interfaces wireguard wg1 address '10.254.254.1/24'
mhamzahkhan@gw# set interfaces wireguard wg1 description 'VPN'
mhamzahkhan@gw# set interfaces wireguard wg1 ip adjust-mss '1380'
mhamzahkhan@gw# set interfaces wireguard wg1 mtu '1420'
mhamzahkhan@gw# set interfaces wireguard wg1 port '51920'
mhamzahkhan@gw# set interfaces wireguard wg1 private-key '${SERVER_PRIVATE_KEY}'
```

Next, for each device that will connect to the VPN, we need to add a peer definition. VyOS makes this extremely easy, and even generates a QR code which can be scanned to easily configure the WireGuard client on a phone, for example:

```bash
mhamzahkhan@gw:~$ generate wireguard client-config hamzah-phone interface wg1 server ${SERVER_PUBLIC_IP} address 10.254.254.2/24

WireGuard client configuration for interface: wg1

To enable this configuration on a VyOS router you can use the following commands:

=== VyOS (server) configuration ===
set interfaces wireguard wg1 peer hamzah-phone allowed-ips '10.254.254.2/32'
set interfaces wireguard wg1 peer hamzah-phone public-key '${CLIENT_PUBLIC_KEY}'

=== RoadWarrior (client) configuration ===
[Interface]
PrivateKey = ${CLIENT_PRIVATE_KEY}
Address = 10.254.254.2/32
DNS = 1.1.1.1

[Peer]
PublicKey = ${SERVER_PUBLIC_KEY}
Endpoint = ${SERVER_PUBLIC_IP}:51920
AllowedIPs = 0.0.0.0/0, ::/0

[QR code omitted - the command also prints a large scannable QR code of the client configuration here]
```

If you are configuring the client on a phone, the QR code makes it incredibly easy to set up. Alternatively, for the macOS client, you can simply copy and paste the client configuration printed above the QR code.

## Conclusion

As we conclude our journey through configuring VyOS as a WireGuard VPN server, you now possess a fully functional WireGuard VPN setup, empowering you to securely access your self-hosted resources from anywhere on the planet. In our ever-evolving, interconnected world, the demand for secure, remote network access remains as vital as ever.
By utilising WireGuard and VyOS, you have armed yourself with the ability to stay seamlessly connected to your internal services and servers, whether you're managing a personal web server, experimenting with home automation, or trying to access secure files on your office network.

In my next post, I will discuss how I use WireGuard to host services in my home lab, despite being behind CGNAT.

--------------------------------------------------------------------------------
title: "VyOS - Site-to-Site VPN using Wireguard and OSPF"
date: "2023-09-07"
url: https://www.hamzahkhan.com/vyos-ospf-wireguard/
--------------------------------------------------------------------------------

Connecting two sites securely and efficiently is essential for many businesses and individuals. In this post, we'll explore how to achieve seamless connectivity between two locations using the powerful combination of WireGuard, a modern and high-performance VPN protocol, and VyOS, a robust and versatile network operating system. Whether you're looking to enhance communication between remote offices, create a secure link between your data center and cloud-based infrastructure, or simply want to connect two geographically separated sites, this guide will walk you through the process.

To illustrate, I will use my own setup as an example. I manage equipment hosted in a colocation data center, which I affectionately refer to as my 'colo-lab', and I also maintain a 'home-lab'. Previously, I relied on GRE over IPsec for connectivity between the two sites, but I've recently migrated to WireGuard.

WireGuard boasts a slew of compelling advantages over traditional IPsec, including speed, security, and a refreshingly straightforward setup. Its minimalist design significantly simplifies the configuration process, especially when compared to the complexity of GRE over IPsec.
Throughout this post, I'll walk you through the precise steps I took to configure two VyOS routers to work with WireGuard while enabling efficient route distribution through OSPF. By the end, you'll be equipped to configure your own WireGuard based site-to-site VPN.

## Topology

### Colo Lab

- WireGuard Interface IP: 10.254.2.0/31
- Internal Networks:
  - 10.254.112.0/24
  - 10.254.113.0/24
  - 10.254.114.0/24
- Internal Network Aggregate: 10.254.112.0/21
- Public IP: Referred to as `${COLO_LAB_PUBLIC_IP}`

### Home Lab

- WireGuard Interface IP: 10.254.2.1/31
- Internal Networks:
  - 10.254.88.0/24
  - 10.254.89.0/24
  - 10.254.90.0/24
- Internal Network Aggregate: 10.254.88.0/21
- Public IP: None (it's behind CGNAT)

## Generate Keypairs

First things first, let's generate keypairs for both routers. Make note of these, and keep them safe.

First the colo-lab router:

```bash
mhamzahkhan@cololab-gw:~$ generate pki wireguard key-pair
Private key: <- OMITTED - USE YOUR OWN ONE - I will refer to this as ${COLOLAB_PRIVATE_KEY} ->
Public key: <- OMITTED - USE YOUR OWN ONE - I will refer to this as ${COLOLAB_PUBLIC_KEY} ->
```

Then the home-lab router:

```bash
mhamzahkhan@homelab-gw:~$ generate pki wireguard key-pair
Private key: <- OMITTED - USE YOUR OWN ONE - I will refer to this as ${HOMELAB_PRIVATE_KEY} ->
Public key: <- OMITTED - USE YOUR OWN ONE - I will refer to this as ${HOMELAB_PUBLIC_KEY} ->
```

## Configure WireGuard Interfaces

Next, let's set up the WireGuard interfaces. For these interfaces, I've chosen a private /31 range, which gives us precisely two IP addresses, perfect for a point-to-point link. In my example, we'll use 10.254.2.0/31 and 10.254.2.1/31.

### Colo Lab Router WireGuard Configuration

Please note that because my home lab's internet connection is behind CGNAT, I haven't specified the peer address on the Colo Lab router. This means that the connection will be initiated from the home-lab side.
If you have a static IP address (or a dynamic IP address that doesn't change much), it would be a good idea to specify the peer address so the connection can be initiated from either side.

```bash
mhamzahkhan@cololab-gw:~$ configure
[edit]
set interfaces wireguard wg0 address '10.254.2.0/31'
set interfaces wireguard wg0 description 'Connection to Home-Lab'
set interfaces wireguard wg0 ip adjust-mss '1380'
set interfaces wireguard wg0 mtu '1420'
set interfaces wireguard wg0 peer home-lab allowed-ips '0.0.0.0/0'
set interfaces wireguard wg0 peer home-lab persistent-keepalive '10'
set interfaces wireguard wg0 peer home-lab public-key '${HOMELAB_PUBLIC_KEY}'
set interfaces wireguard wg0 port '51820'
set interfaces wireguard wg0 private-key '${COLOLAB_PRIVATE_KEY}'
```

### Home Lab Router WireGuard Configuration

```bash
mhamzahkhan@homelab-gw:~$ configure
[edit]
set interfaces wireguard wg0 address '10.254.2.1/31'
set interfaces wireguard wg0 description 'Connection to Colo-Lab'
set interfaces wireguard wg0 ip adjust-mss '1380'
set interfaces wireguard wg0 mtu '1420'
set interfaces wireguard wg0 peer colo-lab address '${COLO_LAB_PUBLIC_IP}'
set interfaces wireguard wg0 peer colo-lab allowed-ips '0.0.0.0/0'
set interfaces wireguard wg0 peer colo-lab persistent-keepalive '10'
set interfaces wireguard wg0 peer colo-lab port '51820'
set interfaces wireguard wg0 peer colo-lab public-key '${COLOLAB_PUBLIC_KEY}'
set interfaces wireguard wg0 port '51820'
set interfaces wireguard wg0 private-key '${HOMELAB_PRIVATE_KEY}'
```

## Test WireGuard connection

At this point, both routers should be able to ping each other via the VPN link:

```bash
mhamzahkhan@cololab-gw:~$ ping 10.254.2.1 count 4
PING 10.254.2.1 (10.254.2.1) 56(84) bytes of data.
64 bytes from 10.254.2.1: icmp_seq=1 ttl=64 time=0.339 ms
64 bytes from 10.254.2.1: icmp_seq=2 ttl=64 time=0.382 ms
64 bytes from 10.254.2.1: icmp_seq=3 ttl=64 time=0.344 ms
64 bytes from 10.254.2.1: icmp_seq=4 ttl=64 time=0.347 ms

--- 10.254.2.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3106ms
rtt min/avg/max/mdev = 0.339/0.353/0.382/0.017 ms

mhamzahkhan@homelab-gw:~$ ping 10.254.2.0 count 4
PING 10.254.2.0 (10.254.2.0) 56(84) bytes of data.
64 bytes from 10.254.2.0: icmp_seq=1 ttl=64 time=0.290 ms
64 bytes from 10.254.2.0: icmp_seq=2 ttl=64 time=0.227 ms
64 bytes from 10.254.2.0: icmp_seq=3 ttl=64 time=0.404 ms
64 bytes from 10.254.2.0: icmp_seq=4 ttl=64 time=0.380 ms

--- 10.254.2.0 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3078ms
rtt min/avg/max/mdev = 0.227/0.325/0.404/0.070 ms
```

To gauge the bandwidth between the networks, we can use iPerf3. First, start iPerf3 in server mode on either side of the VPN. I'm running it on the colo-lab router:

```bash
mhamzahkhan@cololab-gw:~$ iperf3 -s
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
```

Next, start iPerf3 on the home lab router.
Let's start with an upload bandwidth test from the home-lab router to the colo-lab router:

```bash
mhamzahkhan@homelab-gw:~$ iperf3 -c 10.254.2.0
Connecting to host 10.254.2.0, port 5201
[  5] local 10.254.2.1 port 33008 connected to 10.254.2.0 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  20.8 MBytes   174 Mbits/sec   99    207 KBytes
[  5]   1.00-2.00   sec  20.7 MBytes   174 Mbits/sec    0    269 KBytes
[  5]   2.00-3.00   sec  19.8 MBytes   166 Mbits/sec  131    194 KBytes
[  5]   3.00-4.00   sec  22.1 MBytes   185 Mbits/sec    0    263 KBytes
[  5]   4.00-5.00   sec  17.3 MBytes   145 Mbits/sec  195   18.7 KBytes
[  5]   5.00-6.00   sec  16.4 MBytes   137 Mbits/sec   63    224 KBytes
[  5]   6.00-7.00   sec  19.9 MBytes   167 Mbits/sec   95    168 KBytes
[  5]   7.00-8.00   sec  11.3 MBytes  95.2 Mbits/sec  123    123 KBytes
[  5]   8.00-9.00   sec  18.9 MBytes   158 Mbits/sec    0    202 KBytes
[  5]   9.00-10.00  sec  20.2 MBytes   169 Mbits/sec   35    207 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   187 MBytes   157 Mbits/sec  741             sender
[  5]   0.00-10.01  sec   186 MBytes   156 Mbits/sec                  receiver

iperf Done.
```

I'm not sure why there are retransmissions; I still need to investigate that, but it's maxing out my home connection's upload.

Now, let's reverse the test, with the colo-lab router sending data to the home-lab router.
Use the -R flag for this: ```bash mhamzahkhan@homelab-gw:~$ iperf3 -c 10.254.2.0 -R Connecting to host 10.254.2.0, port 5201 Reverse mode, remote host 10.254.2.0 is sending [ 5] local 10.254.2.1 port 52016 connected to 10.254.2.0 port 5201 [ ID] Interval Transfer Bitrate [ 5] 0.00-1.00 sec 14.8 MBytes 124 Mbits/sec [ 5] 1.00-2.00 sec 17.4 MBytes 145 Mbits/sec [ 5] 2.00-3.00 sec 17.6 MBytes 148 Mbits/sec [ 5] 3.00-4.00 sec 15.5 MBytes 130 Mbits/sec [ 5] 4.00-5.00 sec 16.3 MBytes 137 Mbits/sec [ 5] 5.00-6.00 sec 12.2 MBytes 102 Mbits/sec [ 5] 6.00-7.00 sec 9.33 MBytes 78.3 Mbits/sec [ 5] 7.00-8.00 sec 7.86 MBytes 65.9 Mbits/sec [ 5] 8.00-9.00 sec 14.7 MBytes 124 Mbits/sec [ 5] 9.00-10.00 sec 15.3 MBytes 128 Mbits/sec - - - - - - - - - - - - - - - - - - - - - - - - - [ ID] Interval Transfer Bitrate Retr [ 5] 0.00-10.01 sec 142 MBytes 119 Mbits/sec 282 sender [ 5] 0.00-10.00 sec 141 MBytes 118 Mbits/sec receiver iperf Done. ``` Some tuning may be needed, but for now, these numbers should suffice. ## Configure OSPF Now, let's dive into OSPF configuration. Note that I use OSPF route summarization, which means we summarize individual subnets on each side into a single summary route, simplifying the routing table. 
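The arithmetic behind the summarization can be sanity-checked with a few lines of Python (a standalone sketch using only the stdlib `ipaddress` module; the prefixes are the home-lab networks from this post):

```python
import ipaddress

# Each site advertises three /24s into area 0.0.0.1, and the ABR
# summarises them with a single /21 "range" statement.
home_subnets = [ipaddress.ip_network(n) for n in
                ("10.254.88.0/24", "10.254.89.0/24", "10.254.90.0/24")]
home_summary = ipaddress.ip_network("10.254.88.0/21")

# Every individual subnet must fall inside the summary prefix...
assert all(n.subnet_of(home_summary) for n in home_subnets)

# ...so the remote site only ever learns one route instead of three,
# and there is room left in the /21 for more /24s later.
print(home_summary)  # 10.254.88.0/21
```

As long as any new subnets at a site are carved out of that site's /21, the remote routing table never grows.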
### Colo Lab Router OSPF Configuration ```bash set protocols ospf area 0.0.0.0 network '10.254.2.0/31' set protocols ospf area 0.0.0.1 network '10.254.112.0/24' set protocols ospf area 0.0.0.1 network '10.254.113.0/24' set protocols ospf area 0.0.0.1 network '10.254.114.0/24' set protocols ospf area 0.0.0.1 range 10.254.112.0/21 set protocols ospf interface eth0 passive set protocols ospf log-adjacency-changes set protocols ospf parameters router-id '10.254.2.0' ``` ### Home Lab Router OSPF Configuration ```bash set protocols ospf area 0.0.0.0 network '10.254.2.0/31' set protocols ospf area 0.0.0.1 network '10.254.88.0/24' set protocols ospf area 0.0.0.1 network '10.254.89.0/24' set protocols ospf area 0.0.0.1 network '10.254.90.0/24' set protocols ospf area 0.0.0.1 range 10.254.88.0/21 set protocols ospf interface eth0 passive set protocols ospf log-adjacency-changes set protocols ospf parameters router-id '10.254.2.1' ``` And magically your routes should be in your routing table! ### Colo Lab Router Verification ```bash mhamzahkhan@cololab-gw:~$ show ip route 10.254.88.0 Routing entry for 10.254.88.0/21 Known via "ospf", distance 110, metric 2, best Last update 11:58:19 ago * 10.254.2.1, via wg0, weight 1 ``` ### Home Lab Router Verification ```bash mhamzahkhan@homelab-gw:~$ show ip route 10.254.112.0 Routing entry for 10.254.112.0/21 Known via "ospf", distance 110, metric 2, best Last update 12:00:02 ago * 10.254.2.0, via wg0, weight 1 ``` ## Conclusion With the successful implementation of WireGuard VPN and OSPF routing, your two sites can now seamlessly communicate, marking a significant step in enhancing your network capabilities. While this guide has laid a solid foundation for your site-to-site VPN, there's more to explore and build upon in future configurations. In my next post, we will discuss configuring a VyOS-based WireGuard VPN for road-warrior style clients.
This will enable secure remote access to your network, allowing you to connect from virtually anywhere with an internet connection. Stay tuned for the next installment, where we continue building on WireGuard and VyOS! -------------------------------------------------------------------------------- title: "Using FreeIPA CA as an ACME Provider for cert-manager" date: "2022-07-27" url: https://www.hamzahkhan.com/using-freeipa-ca-as-an-acme-provider-for-cert-manager/ -------------------------------------------------------------------------------- I'm using [FreeIPA](https://www.freeipa.org/) for authentication services in my home lab. It's extreme overkill for my situation, as I don't have many users (mainly just me!) but alas, I like overkill. :) I am using FreeIPA's DNS service to host some DNS subdomains for internal services. I have configured these subdomains through DNS delegations, but since my IPA servers are not accessible from the internet, this breaks both the HTTP-01 and DNS-01 verification challenges from [LetsEncrypt](https://letsencrypt.org/). Yesterday evening, I was playing around with [TrueCommand](https://www.truenas.com/truecommand/) and have it hosted on one of my IPA internal domains, but as I cannot use LetsEncrypt to issue a certificate for it, I decided to use the CA built into FreeIPA, since it supports ACME as well. As all the machines that will need to use the service are already enrolled into IPA, the CA certificate for IPA is also installed on those nodes, meaning any certificate issued by FreeIPA is automatically trusted.
To get this to work, I had to first enable ACME support from within FreeIPA: ```bash [root@ipa-server ~]# ipa-acme-manage enable ``` FreeIPA's ACME service supports both HTTP-01 and DNS-01 challenges, but I generally prefer DNS-01. For cert-manager to add the \_acme-challenge DNS record to FreeIPA, we can use cert-manager's RFC-2136 provider. To do this, we must create a new TSIG key on our IPA server: ```bash [root@ipa-server ~]# tsig-keygen -a hmac-sha512 acme-update >> /etc/named/ipa-ext.conf [root@ipa-server ~]# systemctl restart named-pkcs11.service ``` Enable dynamic updates for the IPA DNS subdomain: ```bash [root@ipa-server ~]# ipa dnszone-mod k8s.intahnet.co.uk --dynamic-update=True --update-policy='grant acme-update wildcard * ANY;' ``` Next, I had to modify my cert-manager installation slightly to include my own CA certificate bundle, which includes my IPA CA cert. To do this I had to first create the bundle, and then create a Kubernetes ConfigMap for it: ```bash [mhamzahkhan@laptop ~]# cat /etc/ipa/ca.crt > ca-certificates.crt [mhamzahkhan@laptop ~]# kubectl -n cert-manager create configmap ca-bundle --from-file ca-certificates.crt ``` {{% notice info %}} If the machine you are using is enrolled in the IPA domain, you could also just use /etc/pki/tls/certs/ca-bundle.crt, which is actually what I did since it contains all the other CA certificates that cert-manager may need (for example the ISRG Root X1 CA certificate, which is needed so cert-manager can properly access the LetsEncrypt ACME servers). {{% /notice %}} Next, I had to modify the cert-manager deployment to make use of the ca-bundle. As I am using the cert-manager helm chart, this was quite easy. 
I added the following to my cert-manager helm values file: ```yaml --- volumes: - name: ca-bundle configMap: name: ca-bundle volumeMounts: - name: ca-bundle mountPath: /etc/ssl/certs/ca-certificates.crt subPath: ca-certificates.crt readOnly: false ``` Once this has been deployed, we need to create a secret in Kubernetes for the TSIG key. Grab the TSIG key we generated earlier from your IPA server (/etc/named/ipa-ext.conf), and create a Kubernetes secret with it: ```bash [mhamzahkhan@laptop ~]# kubectl -n cert-manager create secret generic ipa-tsig-secret --from-literal=tsig-secret-key="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" ``` Next, add a new ClusterIssuer for IPA's ACME service: ```yaml --- apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: ipa namespace: cert-manager spec: acme: email: admin@ipa.intahnet.co.uk server: https://ipa-ca.ipa.intahnet.co.uk/acme/directory privateKeySecretRef: name: ipa-issuer-account-key solvers: - dns01: rfc2136: nameserver: 10.0.0.22 tsigKeyName: acme-update tsigAlgorithm: HMACSHA512 tsigSecretSecretRef: name: ipa-tsig-secret key: tsig-secret-key selector: dnsZones: - 'k8s.intahnet.co.uk' ``` Now you should be set to request certificates! ```yaml --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: truecommand-certificate namespace: default spec: commonName: 'truecommand.k8s.intahnet.co.uk' dnsNames: - truecommand.k8s.intahnet.co.uk issuerRef: name: ipa kind: ClusterIssuer privateKey: algorithm: RSA encoding: PKCS1 size: 4096 secretName: truecommand-tls ``` All working: ```bash [mhamzahkhan@laptop ~]# kubectl get certificate NAME READY SECRET AGE truecommand-certificate True truecommand-tls 23s [mhamzahkhan@laptop ~]# kubectl get secrets NAME TYPE DATA AGE truecommand-certificate-q8qkh kubernetes.io/tls 2 29s ``` It's a very similar process to use ExternalDNS with FreeIPA as ExternalDNS also supports RFC2136.
I have not set this up yet, but the process is described in this excellent blog post: [How to set up Dynamic DNS on FreeIPA for your Kubernetes Cluster](https://astrid.tech/2021/04/18/0/k8s-freeipa-dns/). -------------------------------------------------------------------------------- title: "Playing with Mastodon, the Open Source, Federated Social Network" date: "2019-01-26" url: https://www.hamzahkhan.com/playing-with-mastodon-the-open-source-federated-social-network/ -------------------------------------------------------------------------------- I recently started playing with Mastodon, an open source, Twitter-like social network. In the past, I've looked at StatusNet (now known as GNU Social), but at the time it did not seem very intuitive, and had a number of problems which I can no longer remember. So far I have been using Mastodon for almost a month, and while the community is very small, I'm finding myself using it more than I do Twitter (or any other social media platform, for that matter). Mastodon is a federated social network, meaning that unlike Twitter, Facebook or Instagram, anyone can run their own instance and interact with users on other instances. Mastodon is not the only federated social network; there are a number of others which collectively make up the "Fediverse". Mastodon's web interface has a multi-column layout very similar to TweetDeck, and is easy enough for most people to quickly get used to. There are also a number of very well made Android and iOS apps, for example, [Tusky](https://tuskyapp.github.io/). As Mastodon uses the ActivityPub protocol, it can also talk to a number of other social networks which use ActivityPub, such as PixelFed, and PeerTube. PixelFed is an image sharing platform which will remind you of Instagram, and also has a Tumblr-like interface coming up. I haven't used PixelFed, as it is currently undergoing heavy work and has a lot of missing features. PeerTube is a decentralised video hosting network.
Again, I haven't used PeerTube much as I didn't really find any interesting content on it. That said, I will be keeping an eye on both projects to see how they progress. Collectively, these federated social networks make up the Fediverse, and the number of running instances and users is growing quite quickly: [https://the-federation.info/](https://the-federation.info/). I am running my own Mastodon instance on [Mastodon@intahnet.co.uk](https://intahnet.co.uk), which is free for anyone to register on, or if you are already on Mastodon please follow me! [@mhamzahkhan@intahnet.co.uk](https://intahnet.co.uk/@mhamzahkhan). If you are running your own Mastodon instance, please feel free to also subscribe to my ActivityPub relay: [https://relay.intahnet.co.uk/](https://relay.intahnet.co.uk/). -------------------------------------------------------------------------------- title: "Automate Athan Prayer Times on Google Home with Home Assistant" date: "2018-12-12" url: https://www.hamzahkhan.com/prayer-time-notifications-on-google-home-via-home-assistant/ -------------------------------------------------------------------------------- {{% notice update %}} I recently migrated my blog from [WordPress](https://wordpress.org/) to [Hugo](https://gohugo.io/). Due to this migration, the comments that were originally on this post are not present. I hope to migrate them over soon. {{% /notice %}} {{% notice info %}} I've had quite a lot of messages asking for help with getting this working. The best place to reach me would be via this Matrix room: [#hamzahs-chat:intahnet.co.uk](https://matrix.to/#/#hamzahs-chat:intahnet.co.uk). Please use the Matrix room and avoid using my Instagram, LinkedIn etc. {{% /notice %}} I have a Google Home which I have been using for various things as I very slowly build my collection of "smart" devices. One thing I was very interested in making my Google Home do is to have the Athan play when it is time for prayer.
Unfortunately, there isn't any native way to do this with a Google Home at the moment. I have seen people do it using IFTTT, but as I am already using [Home Assistant](https://www.home-assistant.io/) as my automation platform, I wanted to keep everything within it. What is very interesting about doing it using Home Assistant is that while I can get the basic functionality of the Athan playing, I can also perform other automations that may be useful. For example, I can switch off or pause whatever is on the TV, switch the lights on dimly for Fajr prayer, maybe even switch on the [Ambi Pur 3volution](https://amzn.to/2UEKR8J) air freshener that I have plugged into an [ESPHome](https://esphome.io/) flashed [Sonoff S20](https://amzn.to/2SGl4ve), so my flat smells nice during salah time! The way I have implemented this is as follows: - Add a REST sensor which fetches the Athan times using the [Al Adhan Service's API](https://aladhan.com/) - Add template sensors which extract the timings for each prayer - Create an automation to play the Athan I haven't put my Home Assistant configuration on GitHub, so I'll put it all here for now in case anyone else wants to do something similar.
```yaml sensor: - platform: rest name: 'Prayer Times' json_attributes: - data resource: 'http://api.aladhan.com/v1/timings?latitude=52.587904&longitude=-0.1458179&method=3' value_template: '{{ value_json["data"]["meta"]["method"]["name"].title() }}' scan_interval: 86400 - platform: template sensors: fajr: friendly_name: 'Fajr Prayer Time' value_template: '{{ states.sensor.prayer_times.attributes.data.timings["Fajr"] | timestamp_custom("%H:%M") }}' dhuhr: friendly_name: 'Dhuhr Prayer Time' value_template: '{{ states.sensor.prayer_times.attributes.data.timings["Dhuhr"] | timestamp_custom("%H:%M") }}' asr: friendly_name: 'Asr Prayer Time' value_template: '{{ states.sensor.prayer_times.attributes.data.timings["Asr"] | timestamp_custom("%H:%M") }}' maghrib: friendly_name: 'Maghrib Prayer Time' value_template: '{{ states.sensor.prayer_times.attributes.data.timings["Maghrib"] | timestamp_custom("%H:%M") }}' isha: friendly_name: 'Isha Prayer Time' value_template: '{{ states.sensor.prayer_times.attributes.data.timings["Isha"] | timestamp_custom("%H:%M") }}' automation: - alias: 'Fajr Athan' initial_state: true hide_entity: true trigger: - platform: template value_template: '{{ states.sensor.time.state == states("sensor.fajr") }}' action: - service: media_player.volume_set data_template: entity_id: media_player.living_room_speaker volume_level: 0.75 - service: media_player.play_media data: entity_id: media_player.living_room_speaker media_content_id: https://s3.intahnet.co.uk/athan/fajr.mp3 media_content_type: audio/mp3 - alias: 'Athan' initial_state: true hide_entity: true trigger: - platform: template value_template: '{{ states.sensor.time.state == states("sensor.dhuhr") }}' - platform: template value_template: '{{ states.sensor.time.state == states("sensor.asr") }}' - platform: template value_template: '{{ states.sensor.time.state == states("sensor.maghrib") }}' - platform: template value_template: '{{ states.sensor.time.state == states("sensor.isha") }}' action: -
service: media_player.volume_set data_template: entity_id: media_player.living_room_speaker volume_level: 0.75 - service: media_player.play_media data: entity_id: media_player.living_room_speaker media_content_id: https://s3.intahnet.co.uk/athan/normal.mp3 media_content_type: audio/mp3 ``` This is just a basic automation that sets the volume and plays the Athan. I will expand this so that it only plays the Athan when someone is home, and use input booleans so it can be disabled if needed (for example, during Ramadan when we switch on Islam Channel for the Maghrib Athan). Now that I think of it, it's also possible to make Home Assistant automatically switch the TV on, and over to Islam Channel during Ramadan! One annoyance I have found is that before anything is cast to the Google Home, it makes a "blomp" sound. Unfortunately, there isn't any way to disable this, but there are some [tricks](https://community.home-assistant.io/t/i-did-it-i-defeated-the-horrible-google-home-cast-start-prompt-sound/36123) on the Home Assistant forums which allow you to get around it. I opted to just live with it, as using those methods would require Home Assistant to keep the casting connection active, and in doing so, would stop the Google Home from entering its low power sleep mode. I hope this is helpful for anyone else trying to achieve something similar. -------------------------------------------------------------------------------- title: "Growing Date Palms from Seed" date: "2016-11-21" url: https://www.hamzahkhan.com/growing-date-palms-seed/ -------------------------------------------------------------------------------- Recently my auntie gave me some Ajwah date fruit she got while she was in Medina in Saudi Arabia. I absolutely love dates and have always heard that dates have a lot of health benefits. While I was enjoying my dates, I decided to Google what the health benefits actually are.
Somehow, I came across an article and discovered that it's actually possible to grow date palms indoors using the seeds. I'm not sure why, but the thought hadn't crossed my mind that they are grown from seed. Some people have even managed to have some success in crappy weather like we have in the UK. While they didn't really get a nice big, beautiful date palm tree, they did get some nice looking (but small) palm trees. I started to wonder if I could grow some date palms in my small flat. My curiosity got the better of me, and I decided to give it a go and see what happens. It could be a fun little side experiment. 🙂 Living in London, I don't expect them to live very long, if it even works. The weather isn't really suited for growing date palm trees, especially as I am starting at the beginning of November while it's already cold, and going to get colder, but I still want to try to see how it goes. I started by gathering a bunch of date seeds. I am using the seeds from the Ajwah dates I received from my auntie, and seeds from Jordanian Medjool dates I bought from a local grocery store. Medjool dates are absolutely delicious. They are large and very sweet. Ajwah dates aren't as big or as sweet as Medjool dates, but they are still extremely good. After reading various pages and watching YouTube videos, I soaked the seeds in water for around one week, changing the water every day to avoid the growth of mould. I don't really know much about plants, or gardening, or anything like that, but I think this is done to soften the outer shell, speed up germination time, and dissolve away any remaining fruit and sugars. After one week, I put the seeds into a damp kitchen towel, put them in a cheap plastic food container, and put it on top of my water boiler where it should stay quite warm. I forgot to take photos before I started, but if the seeds start to grow roots, I will make a secondary post with updates and photos.
🙂 -------------------------------------------------------------------------------- title: "Cisco ASA 9.2 on Cisco ASA 5505 with Unsupported Memory Configuration Fail" date: "2015-06-03" url: https://www.hamzahkhan.com/cisco-asa-9-2-cisco-asa-5505-unsupported-memory-configuration-fail/ -------------------------------------------------------------------------------- {{% notice update %}} 16th November 2015 - It looks like it now works. I am currently running asa924-2-k8.bin on my 5505s, with my 1GB sticks of RAM, and it hasn't complained! 🙂 {{% /notice %}} The Cisco ASA 5505 officially supports a maximum of 512MB RAM. Last year I wrote a post detailing a small experiment I did where I [upgraded both my Cisco ASA 5505s]({{< relref "2013-01-20-cisco-asa-5505-ram-upgrade.md" >}}) to use 1GB sticks of RAM, double the officially supported value. Since then, it has worked great and both boxes have been chilling out in my rack, but recently Cisco released ASA 9.2. The full list of new features and changes can be read in the [release notes](http://www.cisco.com/c/en/us/td/docs/security/asa/asa92/release/notes/asarn92.html), but the feature I was most excited about was BGP support being added. The ASA has had OSPF support for some time, but it was lacking BGP, which I always thought was a feature it should have. Now that it has been added, I was quite excited to play with it! So I grabbed the latest 9.2 image (`asa921-k8.bin`), and dropped it on both my ASAs. I switched the bootloader configuration to load the new image. Next, I reloaded the secondary device, and waited for it to come back up. Half an hour later, nothing. So I connected a serial cable to see what was up, and to my surprise found that it wasn't doing anything. It was just stuck saying: ```plain Loading disk0:/asa921-k8.bin... ``` Initially I wasn't really sure what was causing this, so I tried switching out the RAM and putting in the stock 512MB stick that I got with the box, and magic! It worked.
I'm quite disappointed that my 1GB sticks won't work with 9.2, but it's not a huge loss. My Cacti graphs show I only use around 300MB anyway! ![Memory Usage on my Cisco ASA 5505s](/images/2014/cisco-asa-9-2-cisco-asa-5505-unsupported-memory-configuration-fail/graph_image.png) I'm going to have to buy a 512MB stick for my secondary ASA, as right now they refuse to be in a failover configuration due to having different software versions and different memory sizes. Alternatively, I'm thinking of just replacing these boxes with something else. My ISP (Virgin Media) will be upgrading my line to 152Mbit/s later this year. The ASA 5505 only has 100Mbit ports, so I would be losing 52Mbit/s! I don't want that, so I'll have to get something faster. I'll probably either go with a custom Linux box with IPtables, or maybe a virtual ASA now that Cisco offers that! 🙂 -------------------------------------------------------------------------------- title: "Securing Your Postfix Mail Server with Greylisting, SPF, DKIM and DMARC and TLS" date: "2014-02-08" url: https://www.hamzahkhan.com/securing-postfix-mail-server-greylisting-spf-dkim-dmarc-tls/ -------------------------------------------------------------------------------- A few months ago, while trying to debug some SPF problems, I came across ["Domain-based Message Authentication, Reporting & Conformance" (DMARC)](http://www.dmarc.org/). DMARC basically builds on top of two existing frameworks, [Sender Policy Framework (SPF)](http://www.openspf.org/), and [DomainKeys Identified Mail (DKIM)](http://www.dkim.org/). SPF is used to define who can send mail for a specific domain, while DKIM signs the message. Both of these are pretty useful on their own, and reduce incoming spam significantly, but the problem is you don't have any "control" over what the receiving end does with email.
For example, company1's mail server may just give the email a higher spam score if the sending mail server fails SPF authentication, while company2's mail server might outright reject it. DMARC gives you finer control, allowing you to dictate what should be done. DMARC also lets you publish a forensics address. This is used to send back a report from remote mail servers, and contains details such as how many mails were received from your domain, how many failed authentication, from which IPs, and which authentication tests failed. I've had a DMARC record published for my domains for a few months now, but I had not set up any filter to check incoming mail for their DMARC records, or send back forensic reports. Today, I was in the process of setting up a third backup MX for my domains, so I thought I'd clean up my configs a little, and also set up DMARC properly on my mail servers. So in this article, I will be discussing how I set up my Postfix servers using Greylisting, SPF, DKIM and DMARC, and also using TLS for incoming/outgoing mail. I won't be going into full details of how to set up a Postfix server, only the specifics needed for SPF/DKIM/DMARC and TLS. We'll start with TLS as that is easiest. ## TLS I wanted all incoming and outgoing mail to use opportunistic TLS. To do this, all you need to do is create a certificate: ```console [root@servah ~]# cd /etc/postfix/ [root@servah postfix]# openssl genrsa -des3 -out mx1.example.org.key 2048 [root@servah postfix]# openssl rsa -in mx1.example.org.key -out mx1.example.org.key-nopass [root@servah postfix]# mv mx1.example.org.key-nopass mx1.example.org.key [root@servah postfix]# openssl req -new -key mx1.example.org.key -out mx1.example.org.csr ``` Now, you can either self-sign the certificate request, or do as I have and use [CAcert.org](http://www.cacert.org/).
Once you have a signed certificate, dump it in mx1.example.org.crt, and tell Postfix to use it in `/etc/postfix/main.cf`: ```plain # Use opportunistic TLS (STARTTLS) for outgoing mail if the remote server supports it. smtp_tls_security_level = may # Tell Postfix where your ca-bundle is or it will complain about trust issues! smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.trust.crt # I wanted a little more logging than default for outgoing mail. smtp_tls_loglevel = 1 # Offer opportunistic TLS (STARTTLS) to connections to this mail server. smtpd_tls_security_level = may # Add TLS information to the message headers smtpd_tls_received_header = yes # Point this to your CA file. If you used CAcert.org, this is # available at http://www.cacert.org/certs/root.crt smtpd_tls_CAfile = /etc/postfix/ca.crt # Point at your cert and key smtpd_tls_cert_file = /etc/postfix/mx1.example.org.crt smtpd_tls_key_file = /etc/postfix/mx1.example.org.key # I wanted a little more logging than default for incoming mail. smtpd_tls_loglevel = 1 ``` Restart Postfix: ```console [root@servah ~]# service postfix restart ``` That should do it for TLS. I tested by sending an email from my email server to my Gmail account, and back again, checking the logs to see if the connections were indeed using TLS. ## Greylisting [Greylisting](http://en.wikipedia.org/wiki/Greylisting) is a method of reducing spam which is so simple, yet so effective, it's quite amazing! Basically, incoming relay attempts are temporarily delayed with an SMTP temporary reject for a fixed amount of time. Once this time has elapsed, any further attempts to relay from that IP are allowed to progress further through your ACLs. This is extremely effective, as a lot of spam bots will not have any queueing system, and will not re-try to send the message!
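The policy described above is simple enough to sketch in a few lines of Python. This is an illustrative toy, not how Postgrey is actually implemented; it just shows the idea of tempfailing the first delivery attempt for an unknown (client, sender, recipient) triplet and letting retries through once the delay (300 seconds, matching Postgrey's default) has passed:

```python
import time

GREYLIST_DELAY = 300   # seconds a new triplet must wait (Postgrey's default)
seen = {}              # (client_ip, sender, recipient) -> first-seen timestamp

def greylist(client_ip, sender, recipient, now=None):
    """Return a Postfix policy action for one delivery attempt."""
    now = time.time() if now is None else now
    triplet = (client_ip, sender, recipient)
    first_seen = seen.setdefault(triplet, now)
    if now - first_seen < GREYLIST_DELAY:
        # A real MTA queues the mail and retries; most spam bots never do.
        return "DEFER_IF_PERMIT Greylisted, try again later"
    return "DUNNO"     # no opinion, fall through to the next restriction

# First attempt is deferred; a retry after the delay is let through.
print(greylist("203.0.113.5", "a@example.com", "b@example.org", now=0))
print(greylist("203.0.113.5", "a@example.com", "b@example.org", now=400))
```

The real Postgrey also auto-whitelists well-behaved clients so regular correspondents aren't repeatedly delayed.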
EPEL already has an RPM for [Postgrey](http://postgrey.schweikert.ch/), so I'll use that for Greylisting: ```console [root@servah ~]# yum install postgrey ``` Set it to start on boot, and manually start it: ```console [root@servah ~]# chkconfig postgrey on [root@servah ~]# service postgrey start ``` Next, we need to tell Postfix to pass messages through Postgrey. By default, the RPM provided init scripts set up a unix socket in `/var/spool/postfix/postgrey/socket`, so we'll use that. Edit `/etc/postfix/main.cf`, and in your `smtpd_recipient_restrictions`, add `check_policy_service unix:postgrey/socket`, like I have: ```plain smtpd_recipient_restrictions= permit_mynetworks, reject_invalid_hostname, reject_unknown_recipient_domain, reject_non_fqdn_recipient, permit_sasl_authenticated, reject_unauth_destination, check_policy_service unix:postgrey/socket, reject_rbl_client dnsbl.sorbs.net, reject_rbl_client zen.spamhaus.org, reject_rbl_client bl.spamcop.net, reject_rbl_client cbl.abuseat.org, reject_rbl_client b.barracudacentral.org, reject_rbl_client dnsbl-1.uceprotect.net, permit ``` As you can see, I am also using various RBLs. Next, we restart Postfix: ```console [root@servah ~]# service postfix restart ``` All done. Greylisting is now in effect! ## SPF Next, we'll set up SPF. There are many different SPF filters available, and probably the most popular one to use with Postfix would be [pypolicyd-spf](https://launchpad.net/pypolicyd-spf/), which is also included in EPEL, but I was unable to get OpenDMARC to see the Received-SPF headers. I think this is due to the order in which a message is passed through a milter and through a Postfix policy engine, and I was unable to find a workaround. So instead I decided to use [smf-spf](https://github.com/flowerysong/smf-spf), which is currently unmaintained, but from what I understand it is quite widely used, and quite stable.
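To get a feel for what any SPF filter is doing under the hood, here is a deliberately minimal evaluator in Python. This is an illustration only, not smf-spf's code; real resolvers such as libspf2 also handle the a, mx, include: and redirect= mechanisms, macros, and the DNS lookups themselves:

```python
import ipaddress

def check_spf(record, client_ip):
    """Evaluate a tiny subset of an SPF record for a connecting client:
    only ip4:/ip6: mechanisms and the -all / ~all terminators."""
    ip = ipaddress.ip_address(client_ip)
    for term in record.split()[1:]:   # skip the "v=spf1" tag
        if term.startswith(("ip4:", "ip6:")):
            # Only compare within the same address family.
            if ip.version == int(term[2]) and ip in ipaddress.ip_network(term[4:], strict=False):
                return "pass"
        elif term == "-all":
            return "fail"             # with RefuseFail on, the message is rejected here
        elif term == "~all":
            return "softfail"         # usually just feeds into the spam score
    return "neutral"

record = "v=spf1 ip4:192.0.2.0/24 -all"
print(check_spf(record, "192.0.2.10"))    # pass
print(check_spf(record, "198.51.100.1"))  # fail
```

The RefuseFail/AddHeader options in the smf-spf config below map directly onto what happens with the "fail" and "pass"/"softfail" outcomes.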
I did apply some patches to smf-spf which were posted by Andreas Schulze on the [OpenDMARC mailing lists](http://www.trusteddomain.org/pipermail/opendmarc-users/2013-June/000153.html). They are mainly cosmetic patches, and aren't necessary, but I liked them so I applied them. I was going to write an RPM spec file for smf-spf, but I noticed that [Matt Domsch](http://domsch.com/blog/) has kindly already submitted packages for [smf-spf](https://bugzilla.redhat.com/show_bug.cgi?id=1057876) and [libspf2](https://bugzilla.redhat.com/show_bug.cgi?id=1057874) for review. I did have to modify both packages a little. For smf-spf I pretty much only added the patches I mentioned earlier, and a few minor changes I wanted. For libspf2 I had to re-run autoreconf and update Matt Domsch's patch, as it seemed to break on EL6 boxes due to incompatible autoconf versions. I will edit this post later and add links to the SRPMS. I built the RPMs, signed them with my key and published them in my internal RPM repo. I won't go into detail on that, and will continue from installation: ```console [root@servah ~]# yum install smf-spf ``` Next, I edited `/etc/mail/smfs/smf-spf.conf`: ```plain WhitelistIP 127.0.0.0/8 RefuseFail on AddHeader on User smfs Socket inet:8890@localhost ``` Set smf-spf to start on boot, and also start it manually: ```console [root@servah ~]# chkconfig smf-spf on [root@servah ~]# service smf-spf start ``` Now we edit the Postfix config again, and add the following to the end of `main.cf`: ```plain milter_default_action = accept milter_protocol = 6 smtpd_milters = inet:localhost:8890 ``` Restart Postfix: ```console [root@servah ~]# service postfix restart ``` Your mail server should now be checking SPF records! 🙂 You can test this by trying to forge an email from Gmail or something. ## DKIM DKIM was a little more complicated to set up, as I have multiple domains.
Luckily, OpenDKIM is already in EPEL, so I didn't have to do any work to get an RPM for it! 🙂 Install it using yum: ```console [root@servah ~]# yum install opendkim ``` Next, edit the OpenDKIM config file. I'll just show what I changed using a diff: ```diff 20c20 < Mode v --- > Mode sv 58c58 < Selector default --- > #Selector default 70c70 < #KeyTable /etc/opendkim/KeyTable --- > KeyTable /etc/opendkim/KeyTable 75c75 < #SigningTable refile:/etc/opendkim/SigningTable --- > SigningTable refile:/etc/opendkim/SigningTable 79c79 < #ExternalIgnoreList refile:/etc/opendkim/TrustedHosts --- > ExternalIgnoreList refile:/etc/opendkim/TrustedHosts 82c82 < #InternalHosts refile:/etc/opendkim/TrustedHosts --- > InternalHosts refile:/etc/opendkim/TrustedHosts ``` Next, I created a key: ```console [root@servah ~]# cd /etc/opendkim/keys [root@servah keys]# opendkim-genkey --append-domain --bits=2048 --domain example.org --selector=dkim2k --restrict --verbose ``` This will give you two files in `/etc/opendkim/keys`: - `dkim2k.txt` - Contains your public key, which can be published in DNS. It's already in a BIND compatible format, so I won't explain how to publish this in DNS. - `dkim2k.private` - Contains your private key. Next, we edit `/etc/opendkim/KeyTable`. Comment out any of the default keys that are there and add your own: ```console [root@servah ~]# cat /etc/opendkim/KeyTable dkim2k._domainkey.example.org example.org:dkim2k:/etc/opendkim/keys/dkim2k.private ``` Now edit `/etc/opendkim/SigningTable`, again commenting out the default entries and entering our own: ```console [root@servah ~]# cat /etc/opendkim/SigningTable *@example.org dkim2k._domainkey.example.org ``` Repeat this process for as many domains as you want. It would also be quite a good idea to use different keys for different domains. We can now start OpenDKIM, and set it to start on boot: ```console [root@servah ~]# chkconfig opendkim on [root@servah ~]# service opendkim start ``` Almost done with DKIM!
We just need to tell Postfix to pass mail through OpenDKIM to verify signatures of incoming mail, and to sign outgoing mail. To do this, edit `/etc/postfix/main.cf` again:

```plain
# Pass SMTP messages through smf-spf first, then OpenDKIM
smtpd_milters = inet:localhost:8890, inet:localhost:8891

# This line is so mail received from the command line, e.g. using the sendmail binary or mail() in PHP,
# is signed as well.
non_smtpd_milters = inet:localhost:8891
```

Restart Postfix:

```console
[root@servah ~]# service postfix restart
```

Done with DKIM! Now your mail server will verify incoming messages that have a DKIM header, and sign outgoing messages with your own signature!

## OpenDMARC

Now for the final part of the puzzle. OpenDMARC is not yet in EPEL, but again I did find an [RPM spec awaiting review](https://bugzilla.redhat.com/show_bug.cgi?id=905304), so I used it. Again, I won't go into the process of how to build an RPM; let's assume you have already published it in your own internal repos, and continue from installation:

```console
[root@servah ~]# yum install opendmarc
```

First I edited `/etc/opendmarc.conf`:

```diff
15c15
< # AuthservID name
---
> AuthservID mx1.example.org
121c121
< # ForensicReports false
---
> ForensicReports true
144,145c144
< HistoryFile /var/run/opendmarc/opendmarc.dat/;
< s
---
> HistoryFile /var/run/opendmarc/opendmarc.dat
221c220
< # ReportCommand /usr/sbin/sendmail -t
---
> ReportCommand /usr/sbin/sendmail -t -F 'Example.org DMARC Report' -f 'sysops@example.org'
236c235
< # Socket inet:8893@localhost
---
> Socket inet:8893@localhost
246c245
< # SoftwareHeader false
---
> SoftwareHeader true
253c252
< # Syslog false
---
> Syslog true
261c260
< # SyslogFacility mail
---
> SyslogFacility mail
301c300
< # UserID opendmarc
---
> UserID opendmarc
```

Next, set OpenDMARC to start on boot and manually start it:

```console
[root@servah ~]# chkconfig opendmarc on
[root@servah ~]# service opendmarc start
```

Now we tell postfix to pass messages through
OpenDMARC. To do this, we edit `/etc/postfix/main.cf` once again:

```plain
# Pass SMTP messages through smf-spf first, then OpenDKIM, then OpenDMARC
smtpd_milters = inet:localhost:8890, inet:localhost:8891, inet:localhost:8893
```

Restart Postfix:

```console
[root@servah ~]# service postfix restart
```

That's it! Your mail server will now check the DMARC record of incoming mail, and check the SPF and DKIM results against it. I confirmed that OpenDMARC was working by sending a message from Gmail to my own email and checking the message headers, then sending an email back and checking the headers on the Gmail side. You should see that SPF, DKIM, and DMARC are all being checked when receiving on either side. Finally, we can also set up forensic reporting for the benefit of others who are using DMARC.

## DMARC Forensic Reporting

I found OpenDMARC's documentation to be extremely limited and quite vague, so there was a lot of guesswork involved. As I didn't want my mail servers to have access to my DB server, I decided to run the reporting scripts on a different box I use for running cron jobs.

First I created a MySQL database and user for opendmarc:

```console
[root@mysqlserver ~]# mysql -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 1474392
Server version: 5.5.34-MariaDB-log MariaDB Server

Copyright (c) 2000, 2013, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> CREATE DATABASE opendmarc;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON opendmarc.* TO opendmarc@'script-server.example.org' IDENTIFIED BY 'supersecurepassword';
```

Next, we import the schema into the database:

```console
[root@scripty ~]# mysql -h mysql.example.org -u opendmarc -p opendmarc < /usr/share/doc/opendmarc-1.1.3/schema.mysql
```

Now, to actually import the data from my mail servers into the DB and send out the forensic reports, I have the following script running daily:

```bash
#!/bin/bash
set -e

cd /home/mhamzahkhan/dmarc/

HOSTS="mx1.example.org mx2.example.org mx3.example.org"

DBHOST='mysql.example.org'
DBUSER='opendmarc'
DBPASS='supersecurepassword'
DBNAME='opendmarc'

for HOST in $HOSTS; do
    # Pull the history from each host
    scp -i /home/mhamzahkhan/.ssh/dmarc root@${HOST}:/var/run/opendmarc/opendmarc.dat ${HOST}.dat
    # Purge the history on each host.
    ssh -i /home/mhamzahkhan/.ssh/dmarc root@${HOST} "cat /dev/null > /var/run/opendmarc/opendmarc.dat"
    # Merge the history files. Not needed, but this way opendmarc-import only needs to run once.
    cat ${HOST}.dat >> merged.dat
done

/usr/sbin/opendmarc-import --dbhost=${DBHOST} --dbuser=${DBUSER} --dbpasswd=${DBPASS} --dbname=${DBNAME} --verbose < merged.dat
/usr/sbin/opendmarc-reports --dbhost=${DBHOST} --dbuser=${DBUSER} --dbpasswd=${DBPASS} --dbname=${DBNAME} --verbose --interval=86400 --report-email 'sysops@example.org' --report-org 'Example.org'
/usr/sbin/opendmarc-expire --dbhost=${DBHOST} --dbuser=${DBUSER} --dbpasswd=${DBPASS} --dbname=${DBNAME} --verbose

rm -f *.dat
```

That's it! Run that daily, and you'll send forensic reports to those who want them. 🙂 You now have a nice mail server that checks SPF, DKIM, and DMARC for authentication, and sends out forensic reports! With this setup, I haven't received any spam in the last two months! That's just as far back as I can remember; I'm sure it's actually been longer than that! 🙂 Any comments and suggestions welcome!
--------------------------------------------------------------------------------
title: "Home Lab: Added a Cisco 3845 ISR"
date: "2013-12-25"
url: https://www.hamzahkhan.com/home-lab-added-cisco-3845-isr/
--------------------------------------------------------------------------------

Why? Well, I wanted more ISRs in my home lab. That, plus the fact that my ISP ([Virgin Media](http://www.virginmedia.com/)) will be upgrading my line from 120 Mb/s to 152 Mb/s in the second half of 2014. Looking at the Cisco docs, the 2851 ISR I am using can only do up to around 112 Mb/s. Although there is quite a bit of time before Virgin Media actually go forward with this upgrade, I saw the 3845 going reasonably cheap on eBay, cheaper than what I expect it will be next year when my connection gets upgraded. So, I decided to just buy it now. 🙂

I am starting to have a problem with space for my home lab. My rack is already pretty much fully populated, so I now have equipment on top of, and surrounding, my rack. I don't have space for a second rack at the moment, so it looks like I can't expand my lab any more for a while. Oh well. 🙁

--------------------------------------------------------------------------------
title: "Home Lab Network Redesign Part 2: The Edge Routers"
date: "2013-08-10"
url: https://www.hamzahkhan.com/home-lab-network-redesign-part-2-the-edge-routers/
--------------------------------------------------------------------------------

As I have never used a Mikrotik router before, there was quite a big learning curve. I've only really used Cisco/Juniper-like interfaces to configure routers, and I'm a fan of them. Even though I have gotten a little more used to the RouterOS command line, I must say I'm not a huge fan of it. Most of my reasons are quite minor, but some of the things I don't really like are:

- I find it silly how the menus are structured.
  For example, I have to first configure an interface in the `/interface` context, then switch context to `/ip address` to add an IP address. The same goes for just getting an IP from a DHCP server: you can't do it from the `/ip address` context, but rather the `/ip dhcp-client` context. There are many other cases of this, and while none of it is really a big deal, I find it quite inconvenient. I want to configure the options for a single interface in one place.
- There are a lot of little things I think ROS is lacking. For example, when creating a GRE tunnel from the `/interface gre` context, you have to provide a local-address to source the packets from. This is a pain because if you are on a dynamic IP address, it involves an extra step of editing the address every time your address changes. On Cisco routers, you can just do `tunnel source $INTERFACE` and it'll automagically use the correct source address. The same applies to adding routes via the DHCP-provided default gateway. On IOS, I can just do `ip route 8.8.8.8 255.255.255.255 dhcp` to route some packets explicitly via the DHCP-assigned default gateway. This is useful because in order to reach my dedicated server, I need a single route via my DHCP-assigned default gateway, before BGP from my dedicated server pushes down a new default route. In ROS you can't do this; you have to add a static route manually yourself, and edit it each time your address changes.

Again, these are minor things, but I'm sure there are some bigger things which I cannot remember at the moment. To be fair, considering the price difference between a Mikrotik router and a Cisco/Juniper router, I guess it is acceptable.

In terms of setting up the RB2011UAS-RM, I wanted to keep the config as simple as possible:

- Make the DHCP client add the default route with a distance of 250. This allows the default route pushed from my dedicated server to have priority, and be the active route.
- Add a static route to my dedicated server via the DHCP-assigned default gateway.
- Set up VRRP on the "inside" interfaces of both edge routers.
- Set up GRE tunnels back to my dedicated server.
- Configure BGP from both edge routers to the dedicated server, and a BGP peering to each other via the point-to-point connection.
- Add static routes to my internal network behind my ASAs.

I didn't want to add any masquerading/NAT rules on the edge routers, because I felt it would add extra CPU load for no reason, since the default route is via the dedicated server and NAT is done there. However, I decided it might be better to add a rule to NAT any traffic going straight out to the internet (not via the GRE tunnels), just in case the BGP sessions on both routers went down for whatever reason and traffic was no longer going via my dedicated server.

That's pretty much it for the edge routers. It's simple, and it's working well so far! Again, I can share config files if anyone wants to look at them!

--------------------------------------------------------------------------------
title: "Home Lab Network Redesign Part 1: The Remote Dedicated Server"
date: "2013-08-02"
url: https://www.hamzahkhan.com/home-lab-network-redesign-part-1-the-remote-dedicated-server/
--------------------------------------------------------------------------------

![Home Lab Diagram](/images/2013/home-lab-network-redesign-part-1-the-remote-dedicated-server/home-lab.png)

As promised, here is a very, very basic diagram of my home lab. This is quite a high level overview of it, and the layer 2 information is not present, as I suck at Visio and all the connectors were getting messy with the layer 2 stuff present!

What is not shown in the diagram:

1. There are two back-to-back links between the edge routers, which are in an active-passive bond.
2. Each edge router has two links going into two switches (one link per switch), and both of these links are in an active-passive bonded interface.
3.
The two edge firewalls each have only two links, one going to each of those switches. One port is in the "inside" VLAN, and the other is in the "outside" VLAN. I wanted to have two links per VLAN, going to both switches, but the Cisco ASAs don't do STP or Port-Channels, so having two links would have created a loop.
4. The link between the two ASAs actually goes through a single switch on a dedicated failover VLAN. From reading around, the ASAs go a little crazy sometimes if you use a crossover cable, as the secondary will see its own port go down as well in the event the primary fails. It seems that this can cause some funny things to happen. Using a switch between them means that if the primary goes down, the secondary ASA's port will still stay up, avoiding any funniness.
5. The core gateway only has two interfaces, each going to a different switch. One port is on the "inside" VLAN that the firewalls are connected to, and the other port is a trunk port with all my other VLANs. This isn't very redundant, but I'm hoping to put in a second router when I have some more rack space, and use HSRP to allow high availability.

As I mentioned in my previous post, I have a dedicated server hosted with [Rapid Switch](http://www.rapidswitch.com), through which I wanted to route all my connections. There were a few reasons I wanted to do this:

1. Without routing through the dedicated server, if one of my internet connections went down and I failed over to the other, my IP would be different from my primary line's. This would mess up some sessions, and create a problem for DNS, as I can only really point records at one line or the other.
2. My ISP only provides dynamic IP addresses. Although the DHCP lease is long enough that the IP addresses don't change often, it's a pain updating DNS everywhere on the occasions that it does change.
Routing via my dedicated server allows me to effectively have a static IP address; I only really need to change the endpoint IPs for the GRE tunnels should my Virgin Media provided IP change.
3. I also get the benefit of being able to order more IPs if needed; Virgin Media do not offer more than one!
4. Routing via my dedicated server at Rapid Switch also has the benefit of letting me keep my IP even if I change my home ISP.

The basic setup of the dedicated server is as follows:

1. There is a GRE tunnel going from the dedicated server (diamond) to each of my edge routers. Both GRE tunnels have a private IPv4 address and an IPv6 address. The actual GRE tunnel is transported over IPv4.
2. I used Quagga to add the IPv6 address to the GRE tunnels, as the native Red Hat ifup scripts for tunnels don't allow you to add an IPv6 address through them.
3. I used Quagga's BGPd to create an iBGP peering over the GRE tunnels to each of the Mikrotik routers, and push down a default route to them. The edge routers also announce my internal networks back to the dedicated server.
4. I originally wanted to use eBGP between the dedicated server and the edge routers, but I was having some issues where the BGP session wouldn't establish if I used different ASNs. I'm still looking into that.
5. There are some basic iptables rules just forwarding ports, doing NAT, and cleaning up some packets before passing them over the GRE tunnel, but that's all really.

Other than that, there isn't much to see on the dedicated server. It's quite a simple setup. If anyone would like to see more, I can post any relevant config.
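To illustrate point 3 above, the relevant chunk of Quagga's `bgpd.conf` on the dedicated server looks roughly like the following sketch. The ASN and neighbor addresses are placeholder values (reusing the tunnel addressing from my other posts), not my exact config:

```plain
router bgp 65000
 bgp router-id 10.42.42.2
 ! iBGP peerings to the two edge routers, over the GRE tunnel addresses
 neighbor 10.42.42.1 remote-as 65000
 neighbor 10.42.42.5 remote-as 65000
 ! Push a default route down to each edge router
 neighbor 10.42.42.1 default-originate
 neighbor 10.42.42.5 default-originate
```

The edge routers then announce the internal prefixes back over the same sessions.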
--------------------------------------------------------------------------------
title: "Home Lab Network Redesign with Mikrotik Routers"
date: "2013-07-30"
url: https://www.hamzahkhan.com/home-lab-network-redesign-with-mikrotik-routers/
--------------------------------------------------------------------------------

I currently have two cable connections from Virgin Media coming into my house due to some annoying contract problems while moving. I originally had one line on the 60 Mb/s package and the other on 100 Mb/s, but when Virgin Media upgraded me to 120 Mb/s, I downgraded the 60 Mb/s line to 30 Mb/s to reduce costs.

Since I got into this strange arrangement with Virgin Media, I have been using two separate routers for the connections: a [Cisco 1841 Integrated Services Router](http://www.cisco.com/en/US/products/ps5875/index.html) on the 30 Mb/s line, and a [Cisco 2821 Integrated Services Router](http://www.cisco.com/en/US/products/ps5880/index.html) on the 120 Mb/s line. However, I found that I wasn't able to max out the faster line using the Cisco 2821 ISR. Looking at [Cisco's performance sheet](http://www.cisco.com/web/partners/downloads/765/tools/quickreference/routerperformance.pdf), the Cisco 2821 ISR is only really designed to support lines of up to around 87 Mb/s, and that's without NAT! So it was time to upgrade!

Initially I wanted to get a faster Cisco router, but looking at the second generation ISRs, they're a bit too expensive for a home lab! I did actually upgrade all my [7204 VXRs]({{< relref "2013-02-04-two-more-cisco-7204-vxrs-added-to-my-home-lab.md" >}}) to have [NPE-400](http://www.cisco.com/en/US/prod/collateral/routers/ps341/product_data_sheet09186a00800ae715.html) modules, which according to the performance sheet should do around 215 Mb/s, but the 7204s are extremely loud, so I only switch them on when I am playing with them.
A few of my friends have mentioned good things about [Mikrotik](http://www.mikrotik.com/) routers, so I figured since a new Cisco ISR isn't possible, I'll give Mikrotik a chance. I ended up buying two [RouterBOARD 2011UAS-RM](http://routerboard.com/RB2011UAS-RM) from [WiFi Stock](http://www.wifi-stock.co.uk/).

To integrate the RB2011UAS-RM boxes into my network, I decided I was going to restructure my network a bit. I will be making a series of posts discussing my re-designed network. My goals for the redesign were as follows:

- The RB2011UAS-RM boxes will only function as edge routers, encapsulating traffic in GRE tunnels, and that's all.
- There will be a link between both edge routers, with a BGP peering for redirecting traffic should one of my lines go down.
- They will have GRE tunnels to all my dedicated servers/VPSs.
- I will use Quagga on all dedicated servers and VPSs outside my network to create BGP peerings with my edge routers.
- I wanted to route all my internet out of a server I currently have hosted with [Rapid Switch](http://www.rapidswitch.com), so BGP on the RapidSwitch machine will have to push down a default route.
- I wanted to use a [Cisco ASA 5505 Adaptive Security Appliance](http://www.cisco.com/cisco/web/support/model/tsd_hardware_asa_model_5505.html#0) as a firewall between the edge routers and the rest of my internal network.
- I recently bought a [Cisco 2851 Integrated Services Router](http://www.cisco.com/en/US/products/ps5882/index.html), which I will use as a "core" router.
- I wanted as much redundancy as possible.

In my next post I will create a diagram of what I will be doing, and discuss the setup of the server I have hosted at RapidSwitch. As I have never used Mikrotik routers before, I will also attempt to document my experiences with RouterOS as I go along.
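To give a flavour of the RouterOS side of the first two goals, here is a rough sketch from memory, not my exact config. The interface name `ether1`, the tunnel name `gre-dedi`, and the addressing are placeholders:

```plain
# DHCP client on the WAN port; give its default route a distance of 250 so the
# BGP-learned default from the dedicated server wins
/ip dhcp-client add interface=ether1 add-default-route=yes default-route-distance=250

# GRE tunnel back to the dedicated server
# (local-address is required, and has to be edited whenever the WAN IP changes)
/interface gre add name=gre-dedi local-address=<WAN-IP> remote-address=123.123.123.123
/ip address add interface=gre-dedi address=10.42.42.1/30
```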
--------------------------------------------------------------------------------
title: "Connecting to Usenet via Two Internet Connections"
date: "2013-03-18"
url: https://www.hamzahkhan.com/connecting-to-usenet-via-two-internet-connections/
--------------------------------------------------------------------------------

I currently have two connections from Virgin Media at home, and I wanted to use them both to grab content from Usenet. My Usenet provider is [Supernews](http://www.supernews.com); I've used them for a couple of months, and from what I understand they are actually just a rebranded product of [Giganews](https://www.giganews.com/).

Supernews only allow you to connect to their servers from one IP per account, so even if I had set up load balancing to split connections over both my lines, it would not have worked very well for Usenet, as I would be going out via two IP addresses. For this reason, I decided to take another route. I have a dedicated server with [OVH](http://www.ovh.co.uk) which has a 100 Mb/s line, and my two lines with Virgin Media are 60 Mb/s and 30 Mb/s, so I figured if I route my traffic out via my dedicated server, I should still be able to saturate both connections at home. 🙂

The way I did this was to create two tunnels on my Cisco 2821 Integrated Services Router going to my dedicated server, one tunnel per WAN connection, and basically "port forward" ports 119 and 443 coming over the tunnels to Supernews. It's working great so far, and saturating both lines fully!
The way I did this was as follows. First I set up the tunnels on my trusty Cisco 2821 ISR:

```cisco
interface Tunnel1
 description Tunnel to Dedi via WAN1
 ip address 10.42.42.1 255.255.255.252
 no ip redirects
 no ip unreachables
 no ip proxy-arp
 ip tcp adjust-mss 1420
 tunnel source GigabitEthernet0/0.10
 tunnel mode ipip
 tunnel destination 123.123.123.123
!
interface Tunnel2
 description Tunnel to Dedi via WAN2
 ip address 10.42.42.5 255.255.255.252
 no ip redirects
 no ip unreachables
 no ip proxy-arp
 ip tcp adjust-mss 1420
 tunnel source GigabitEthernet0/1.11
 tunnel mode ipip
 tunnel destination 123.123.123.123
```

That isn't the complete configuration; I also decided to NAT my home network to the IPs of the two tunnels. This was just in order to do it quickly. If I had not used NAT on the two tunnels, I would have had to put a route on my dedicated server for my home network's private IP range. Although that is easy, I was mainly doing this out of curiosity to see if it would work, and to do it without NAT on the tunnels I would have had to figure out how to do policy-based routing on Linux in order to overcome asymmetric routing. That can be a project for another day.
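To give an idea of the NAT part on the 2821, it looked roughly like this. This is a sketch rather than my exact config; the ACL number, the route-map names, and the 192.168.0.0/24 inside network are placeholders:

```cisco
! Match traffic from the home LAN
access-list 110 permit ip 192.168.0.0 0.0.0.255 any
!
! Select traffic by the tunnel it is being routed out of
route-map RM-TUN1 permit 10
 match ip address 110
 match interface Tunnel1
!
route-map RM-TUN2 permit 10
 match ip address 110
 match interface Tunnel2
!
! Overload each tunnel's IP for traffic routed out of it
ip nat inside source route-map RM-TUN1 interface Tunnel1 overload
ip nat inside source route-map RM-TUN2 interface Tunnel2 overload
```

You also need `ip nat inside` on the LAN interface and `ip nat outside` on both tunnel interfaces for any of this to take effect.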
🙂 My dedicated server is running RHEL6, so to set up the tunnels on it I created the relevant `ifcfg-tunl*` files:

```bash
[root@moka ~]# cat /etc/sysconfig/network-scripts/ifcfg-tunl1
DEVICE="tunl1"
BOOTPROTO="none"
ONBOOT="yes"
TYPE="IPIP"
PEER_OUTER_IPADDR="IP_OF_WAN_1"
PEER_INNER_IPADDR="10.42.42.1"
MY_OUTER_IPADDR="123.123.123.123"
MY_INNER_IPADDR="10.42.42.2"

[root@moka ~]# cat /etc/sysconfig/network-scripts/ifcfg-tunl2
DEVICE="tunl2"
BOOTPROTO="none"
ONBOOT="yes"
TYPE="IPIP"
PEER_OUTER_IPADDR="IP_OF_WAN_2"
PEER_INNER_IPADDR="10.42.42.5"
MY_OUTER_IPADDR="123.123.123.123"
MY_INNER_IPADDR="10.42.42.6"
```

I don't really want to go into detail on how to configure netfilter rules using iptables, so I will only paste the relevant lines of my firewall script:

```bash
# This rule masquerades all packets going out of eth0 to the IP of eth0
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Forward packets coming in from tunl1 with a destination IP of 10.42.42.2 and a
# destination port of either 119 or 443 (Supernews use 443 as their NNTP SSL port)
# to Supernews' server IP
iptables -t nat -A PREROUTING -p tcp -i tunl1 -d 10.42.42.2 --dport 119 -j DNAT --to 138.199.67.30
iptables -t nat -A PREROUTING -p tcp -i tunl1 -d 10.42.42.2 --dport 443 -j DNAT --to 138.199.67.30

# Forward packets coming in from tunl2 with a destination IP of 10.42.42.6 and a
# destination port of either 119 or 443 to Supernews' server IP
iptables -t nat -A PREROUTING -p tcp -i tunl2 -d 10.42.42.6 --dport 119 -j DNAT --to 138.199.67.30
iptables -t nat -A PREROUTING -p tcp -i tunl2 -d 10.42.42.6 --dport 443 -j DNAT --to 138.199.67.30
```

That's all there is to it really! Next, I just added two servers in my Usenet client, one pointing at 10.42.42.2 and the other at 10.42.42.6. And magic! Now both lines will be used when my Usenet client is doing its thing!
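One prerequisite worth spelling out: the DNAT rules only do anything if the dedicated server is allowed to forward packets, so IP forwarding needs to be enabled on it:

```console
[root@moka ~]# sysctl -w net.ipv4.ip_forward=1
[root@moka ~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
```

The second line just makes the setting persist across reboots.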
--------------------------------------------------------------------------------
title: "Two more Cisco 7204 VXRs Added to My Home Lab!"
date: "2013-02-04"
url: https://www.hamzahkhan.com/two-more-cisco-7204-vxrs-added-to-my-home-lab/
--------------------------------------------------------------------------------

![Cisco 7204 VXRs in My Home Lab](/images/2013/two-more-cisco-7204-vxrs-added-to-my-home-lab/cisco-7204vxr-lab.jpg)

Last week, I was browsing eBay (as you do!), and noticed two [Cisco 7204 VXR](http://www.cisco.com/en/US/products/hw/routers/ps341/ps347/index.html) router auctions which were about to end pretty soon. The starting bid was £0.99, and there were no current bids! I've wanted to play with the bigger Cisco routers for a while; I have played with the Cisco ISRs, which are designed more for branch/smaller offices. I already have one 7204 VXR in my rack, but adding two more couldn't hurt, so I figured I would go ahead and try my luck and bid.

To my surprise, I won both! I managed to win one of them for £20, and the other for £0.99! £20.99 for two 7204 VXRs isn't bad at all. A quick search on eBay shows that the [NPE-300s](http://www.cisco.com/en/US/docs/routers/7200/install_and_upgrade/network_process_engine_install_config/4448o3.html), which came with both routers, are generally selling for £30, so I'm quite pleased.

The I/O controllers ([C7200-I/O](http://www.cisco.com/en/US/products/hw/routers/ps341/products_data_sheet09186a0080088724.html)) are a bit old, and use a DB-25 connector for the console port instead of the normal RJ-45 that most Cisco devices use. The I/O controllers don't have any Ethernet ports either, but I did get some FastEthernet modules with both routers. I will probably upgrade the I/O controllers to [C7200-I/O-2FE/E](http://www.cisco.com/en/US/products/hw/routers/ps341/products_data_sheet09186a0080088724.html) some time this year, but for now, it will do.
🙂 I now have three 7204 VXRs in my rack; the first one I bought some time last year. In the picture:

- Top 7204 VXR has: [NPE-225](http://www.cisco.com/en/US/prod/collateral/routers/ps341/product_data_sheet09186a0080092132.html), 128MB RAM, C7200-I/O, a Dual FastEthernet module, and an Enhanced ATM module ([ATM PA-A3](http://www.cisco.com/en/US/products/hw/modules/ps2033/products_qanda_item09186a00801d5885.shtml)).
- Middle 7204 VXR has: [NPE-300](http://www.cisco.com/en/US/docs/routers/7200/install_and_upgrade/network_process_engine_install_config/4448o3.html) with 256MB RAM, C7200-I/O, a Single Ethernet module, and an Enhanced ATM module (ATM PA-A3).
- Bottom 7204 VXR has: [NPE-300](http://www.cisco.com/en/US/docs/routers/7200/install_and_upgrade/network_process_engine_install_config/4448o3.html) with 256MB RAM, C7200-I/O-2FE/E, and an Enhanced ATM module (ATM PA-A3).

I'm not really sure if the Enhanced ATM modules will be of any use to me, as I don't think it is possible to use them back-to-back (please correct me if I am wrong!). I do want to get a few Cisco [PA-4T+](http://www.cisco.com/en/US/products/hw/modules/ps2033/products_data_sheet09186a0080091cd2.html) 4 Port Serial modules, but that's for later on.

--------------------------------------------------------------------------------
title: "Cisco ASA 5505 RAM Upgrade"
date: "2013-01-20"
url: https://www.hamzahkhan.com/cisco-asa-5505-ram-upgrade/
--------------------------------------------------------------------------------

{{% notice info %}}
3rd June 2014 - If you are reading this post, you should check out my follow up post: [Cisco ASA 9.2 on Cisco ASA 5505 with Unsupported Memory Configuration Fail]({{< relref "2014-06-03-cisco-asa-9-2-cisco-asa-5505-unsupported-memory-configuration-fail.md" >}})
{{% /notice %}}

I have two [Cisco ASA 5505s](http://www.cisco.com/en/US/products/ps6120/index.html) in my home lab which I acquired almost two years ago from eBay.
I was pretty lucky, as I paid under £70 for each because the seller wasn't too sure what they were! Looking on eBay now, they are selling for around £120! 🙂

Pretty much straight away, I wanted to upgrade to the ASA 8.3 code, which required a RAM upgrade, so I upgraded it. Starting with the 8.3 code, the minimum RAM required to run 8.3 and newer on a 5505 is 512MB. This is also the maximum officially supported amount of RAM. Buying official Cisco RAM is, as always, quite expensive, but since the ASA 5505 uses standard DDR RAM, it is actually possible to use third-party RAM in the ASA 5505.

When I originally performed this upgrade, I found that on various forums many people had actually gone past the officially supported amount of RAM, and upgraded their ASA 5505s to 1GB. Intrigued by this, and needing the extra RAM for the 8.3 code anyway, I decided to upgrade both my ASAs to 1GB as well! There aren't any groundbreaking advantages to upgrading to 1GB as far as I know. I'm guessing the ASA will be able to hold a lot more entries in the NAT table, but I don't really push my ASAs to their limits anyway. I ended up buying two [CT12864Z40B](http://www.crucial.com/uk/store/partspecs.aspx?imodule=CT12864Z40B) sticks from [Crucial](http://www.crucial.com/), which have worked flawlessly for the past year.

Almost 14 months later, I needed to crack open the cases of the ASAs again to get to the CompactFlash, so I thought it would be a good idea to make a quick post about the RAM upgrade process while I'm at it. The upgrade is very easy, and anyone could do it, but I was bored and wanted to write a blog post! 🙂

1. Place the ASA upside down, and unscrew the three screws at the bottom.
   ![Bottom of Cisco ASA 5505](/images/2013/cisco-asa-5505-ram-upgrade/cisco-asa5505-ram-upgrade-1.jpg)
2. Remove the cover.
   ![Cisco ASA 5505 Internals](/images/2013/cisco-asa-5505-ram-upgrade/cisco-asa5505-ram-upgrade-2.jpg)
3. Take out the old RAM, and put in the new RAM.
   ![Cisco ASA 5505 Original RAM](/images/2013/cisco-asa-5505-ram-upgrade/cisco-asa5505-ram-upgrade-3.jpg)
4. You can optionally also upgrade the CompactFlash at this time. I'm using the stock 128MB that came with the ASAs at the moment, but I will probably upgrade sometime soon. 🙂
   ![Cisco ASA 5505 CompactFlash](/images/2013/cisco-asa-5505-ram-upgrade/cisco-asa5505-ram-upgrade-4.jpg)
5. Close everything up, and plug in the power!
   ![Cisco ASA 5505 Failover Pair](/images/2013/cisco-asa-5505-ram-upgrade/cisco-asa5505-ram-upgrade-5.jpg)

All done! I plan to upgrade the CompactFlash to 4GB as well, so I have more working space when I am using the "packet sniffer" built into the ASA. That is a very easy process too, but you have to be careful to copy over your licence files. I will make a post about it if I go ahead with that upgrade.

--------------------------------------------------------------------------------
title: "Nginx, Varnish, HAProxy, and Thin/Lighttpd"
date: "2009-09-29"
url: https://www.hamzahkhan.com/nginx-varnish-haproxy-and-thinlighttpd/
--------------------------------------------------------------------------------

Over the last few days, I have been playing with Ruby on Rails again, and came across Thin, a small yet stable web server which will serve applications written in Ruby. This is a small tutorial on how to get Nginx, Varnish, and HAProxy working together with Thin (for dynamic pages) and Lighttpd (for static pages). I decided to take this route because, from reading in many places, I found that separating static and dynamic content improves performance significantly.

## Nginx

Nginx is a lightweight, high performance web server and reverse proxy. It can also be used as an email proxy, although this is not an area I have explored. I will be using Nginx as the front-end server for serving my Rails applications. I installed Nginx using the RHEL binary package available from EPEL. Configuration of Nginx is very simple.
I have kept it very simple. My current configuration file consists of the following:

```nginx
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] $request "$status" $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"';

    sendfile on;
    tcp_nopush on;
    tcp_nodelay off;
    keepalive_timeout 5;

    # This section enables gzip compression.
    gzip on;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    # Here you can define the addresses on which varnish will listen. You can place
    # multiple servers here, and nginx will load balance between them.
    upstream cache_servers {
        server localhost:6081 max_fails=3 fail_timeout=30s;
    }

    # This is the default virtual host.
    server {
        listen 80 default;

        access_log /var/log/nginx/access.log main;
        error_log /var/log/nginx/error.log;

        charset utf-8;

        # This is optional. It serves up a 1x1 blank gif image from RAM.
        location = /1x1.gif {
            empty_gif;
        }

        # This is the actual part which will proxy all connections to varnish.
        location / {
            proxy_pass http://cache_servers/;
            proxy_redirect http://cache_servers/ http://$host:$server_port/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```

## Varnish

Varnish is a high performance caching server. We can use Varnish to cache content which will not be changed often. I installed Varnish using the RHEL binary package available from EPEL as well. Initially, I only needed to edit `/etc/sysconfig/varnish`, and configure the address on which Varnish will listen:
```bash
DAEMON_OPTS="-a localhost:6081 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -u varnish -g varnish \
             -s file,/var/lib/varnish/varnish_storage.bin,10G"
```

This will make Varnish listen on port 6081 for normal HTTP traffic, and port 6082 for administration. Next, you must edit `/etc/varnish/default.vcl` to actually cache data. My current configuration is as follows:

```perl
backend thin {
    .host = "127.0.0.1";
    .port = "8080";
}

backend lighttpd {
    .host = "127.0.0.1";
    .port = "8081";
}

sub vcl_recv {
    if (req.url ~ "^/static/") {
        set req.backend = lighttpd;
    } else {
        set req.backend = thin;
    }

    # Allow purging of the cache using shift + reload
    if (req.http.Cache-Control ~ "no-cache") {
        purge_url(req.url);
    }

    # Unset any cookies and authorization data for static links and icons, and fetch from the cache
    if (req.request == "GET" && req.url ~ "^/static/" || req.request == "GET" && req.url ~ "^/icons/") {
        unset req.http.cookie;
        unset req.http.Authorization;
        lookup;
    }

    # Look for images in the cache
    if (req.url ~ "\.(png|gif|jpg|ico|jpeg|swf|css|js)$") {
        unset req.http.cookie;
        lookup;
    }

    # Do not cache any POST'ed data
    if (req.request == "POST") {
        pass;
    }

    # Do not cache any non-standard requests
    if (req.request != "GET" && req.request != "HEAD" &&
        req.request != "PUT" && req.request != "POST" &&
        req.request != "TRACE" && req.request != "OPTIONS" &&
        req.request != "DELETE") {
        pass;
    }

    # Do not cache data which has an authorization header
    if (req.http.Authorization) {
        pass;
    }

    lookup;
}

sub vcl_fetch {
    # Remove cookies and cache static content for 12 hours
    if (req.request == "GET" && req.url ~ "^/static/" || req.request == "GET" && req.url ~ "^/icons/") {
        unset obj.http.Set-Cookie;
        set obj.ttl = 12h;
        deliver;
    }

    # Remove cookies and cache images for 12 hours
    if (req.url ~ "\.(png|gif|jpg|ico|jpeg|swf|css|js)$") {
        unset obj.http.Set-Cookie;
        set obj.ttl = 12h;
        deliver;
    }

    # Do not cache anything that does not return a status in the 200s
    if (obj.status >= 300) {
        pass;
    }

    # Do not cache content which Varnish has marked uncacheable
    if (!obj.cacheable) {
        pass;
    }

    # Do not cache content which has a cookie set
    if (obj.http.Set-Cookie) {
        pass;
    }

    # Do not cache content with cache control headers set
    if (obj.http.Pragma ~ "no-cache" || obj.http.Cache-Control ~ "no-cache" || obj.http.Cache-Control ~ "private") {
        pass;
    }

    if (obj.http.Cache-Control ~ "max-age") {
        unset obj.http.Set-Cookie;
        deliver;
    }

    pass;
}
```

## HAProxy

HAProxy is a high-performance TCP/HTTP load balancer. It can be used to load balance almost any type of TCP connection, although I have only used it with HTTP connections. We will be using HAProxy to balance connections over multiple Thin instances. HAProxy is also available in EPEL. My HAProxy configuration is as follows:

```haproxy
global
    daemon
    log 127.0.0.1 local0
    maxconn 4096
    nbproc 1
    chroot /var/lib/haproxy
    user haproxy
    group haproxy

defaults
    mode http
    clitimeout 60000
    srvtimeout 30000
    timeout connect 4000
    option httpclose
    option abortonclose
    option httpchk
    option forwardfor
    balance roundrobin
    stats enable
    stats refresh 5s
    stats auth admin:123abc789xyz

listen thin 127.0.0.1:8080
    server thin 10.10.10.2:2010 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin 10.10.10.2:2011 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin 10.10.10.2:2012 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin 10.10.10.2:2013 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin 10.10.10.2:2014 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin 10.10.10.2:2015 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin 10.10.10.2:2016 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin 10.10.10.2:2017 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin 10.10.10.2:2018 weight 1 minconn 3 maxconn 6 check inter 20000
    server thin 10.10.10.2:2019 weight 1 minconn 3 maxconn 6 check inter 20000
```

## Thin

My Thin server is running on a separate Gentoo box. I installed Thin using the package in Portage.
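As an aside on the HAProxy configuration above: the ten repetitive `server` lines map one-to-one onto the Thin instances we are about to start, so rather than typing them out by hand they can be generated with a quick shell loop. This is just a convenience sketch; the `thin-${port}` names are my own addition (the original names every server `thin`, and unique names make the stats page easier to read):

```shell
# Generate the ten HAProxy "server" lines for the Thin instances on
# ports 2010-2019, one line per backend, with unique server names.
for port in $(seq 2010 2019); do
    echo "    server thin-${port} 10.10.10.2:${port} weight 1 minconn 3 maxconn 6 check inter 20000"
done
```

Paste the output into the `listen thin` section (or append it with a redirect) and adjust the ports if you run a different number of Thin instances.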
To configure Thin, I used the following command:

`thin config -C /etc/thin/config-name.yml -c /srv/myapp --servers 10 -e production -p 2010`

This configures Thin to start 10 servers, listening on ports 2010 to 2019. If you want an init script for Thin, so you can start it at boot, run `thin init`. This will create the init script, and you can set it to start at boot using the normal method (`rc-update add thin default` or `chkconfig thin on`). You should now be able to reach your Rails app through `http://nginx.servers.ip.address`. Next, we must configure the static web server.

## Lighttpd

I decided to go with Lighttpd as it is a fast, stable, and lightweight web server which will do the job perfectly with little configuration. You could also use nginx as the static server instead of Lighttpd, but I decided to separate them. I used the package from EPEL for Lighttpd, and found that most of the default configuration was already as I wanted it. The only things I needed to change were the port and address the server listens on:

```bash
server.port = 8081
server.bind = "127.0.0.1"
```

And that's pretty much it! Now you just have to put any static content into /var/www/lighttpd/ (the default location the EPEL Lighttpd package is configured to use) and reference static links using "/static/document_path_of_file". For example, if I put an image called "bg.png" into /var/www/lighttpd/images/, I can reach it at `http://servers_hostname/static/images/bg.png`.

I have not really done any performance tests on how well this works, and there are probably many things I could have done better. This was mainly an experiment, so I am always looking for feedback or tips on how to make this better, so please do contact me if you have any suggestions! 🙂
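One footnote on the static setup: Varnish routes anything matching `^/static/` to Lighttpd with the `/static/` prefix still present in the URL, while the files themselves live directly under `/var/www/lighttpd/`. If nothing else strips that prefix, a `mod_alias` fragment like the following (an assumption on my part, not something from the original setup) would map it back onto the document root:

```bash
# Assumption: requests reach Lighttpd with the /static/ prefix intact,
# so use mod_alias to map /static/... back onto the document root.
server.modules += ( "mod_alias" )
alias.url = ( "/static/" => "/var/www/lighttpd/" )
```
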
--------------------------------------------------------------------------------
title: "About Me"
lastmod: "2021-05-16"
--------------------------------------------------------------------------------

I'm Hamzah, and I'm a Linux Systems Administrator / "DevOps Engineer" by day. I'm a Londoner and a father of three. This site serves as my blog, where I post my thoughts on and experiences with the technology I am playing with.

The best ways to contact me are [Mastodon](https://intahnet.co.uk/web/@mhamzahkhan), [Matrix](https://matrix.to/#/@mhamzahkhan:intahnet.co.uk) or email. My email address is as follows:

- name = hamzah
- domain = hamzahkhan.com
- \${name}@\${domain}