[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"$ffOHRHhtdBxlOo8DtIVnmj8e9Ljc7q53vwWy1J6gfFrM":3},{"posts":4,"total":208,"page":209,"limit":210},[5,76,84,92,111,119,127,135,143,176,184,192,200],{"id":6,"title":7,"teaser":8,"body":9,"slug":10,"date":11,"tags":12},"6cc5d2e4-f5df-4de9-95ed-c4b8538c87ca","Running 12 services on two VPS boxes in Germany - my self-hosted infrastructure","I run my entire digital life on two Contabo VPS servers in Germany. Personal site, photography portfolio, business site, email, password vault, analytics, monitoring - all of it. No AWS. No Vercel.…","\u003Cp>I run my entire digital life on two Contabo VPS servers in Germany. Personal site, photography portfolio, business site, email, password vault, analytics, monitoring - all of it. No AWS. No Vercel. No managed anything. Just Docker, Traefik, and a lot of evenings spent getting things right.\u003C\u002Fp>\u003Cp>This is not a tutorial. This is a walkthrough of what I actually built, why I built it this way, and what I learned along the way.\u003C\u002Fp>\u003Ch2>The servers\u003C\u002Fh2>\u003Cp>Two VPS boxes at Contabo, both in Germany. That matters to me - European hosting, European jurisdiction, GDPR by default.\u003C\u002Fp>\u003Cp>VPS1 is the jump server. It runs GitLab and a self-hosted GitHub Actions runner. Every deployment to production goes through this machine. It is the gatekeeper.\u003C\u002Fp>\u003Cp>VPS2 is where everything lives. At last count, it runs north of 50 Docker containers across 13 distinct services. Traefik v3 sits at the front, handling routing, SSL termination, and automatic Let's Encrypt certificate renewal for every domain.\u003C\u002Fp>\u003Cp>Both servers were recently upgraded from Ubuntu 20.04 to 22.04 - a process I documented across 12 sessions in the contabo-infrastructure repo. 
Ubuntu 20.04 had reached end of life, and running an EOL operating system under a stack this size was not something I was comfortable with.\u003C\u002Fp>\u003Ch2>The stack at a glance\u003C\u002Fh2>\u003Cp>Here is what runs on VPS2, broken down by service:\u003C\u002Fp>\u003Cp>\u003Cstrong>madsnorgaard.net\u003C\u002Fstrong> - my personal site. A Nuxt 3 SSR frontend that pulls content from two separate headless CMS backends simultaneously. Writing, projects, CV, and about pages come from a headless Drupal 11 instance via JSON:API. Photography and stories come from photo.madsnorgaard.net via the WordPress REST API. One frontend, two content sources. IBM Plex Mono, dark terminal aesthetic.\u003C\u002Fp>\u003Cp>\u003Cstrong>drupal.madsnorgaard.net\u003C\u002Fstrong> - headless Drupal 11. This is where I write articles, manage project pages, and maintain the content that feeds into the Nuxt frontend. It runs its own Docker stack with MySQL, Redis for caching, and Apache Solr for search. The site itself is built from the minimal install profile - no unnecessary modules, no bloat. Composer manages everything. GitHub Actions handles CI\u002FCD with PHP CodeSniffer checking custom modules and themes on every push.\u003C\u002Fp>\u003Cp>\u003Cstrong>photo.madsnorgaard.net\u003C\u002Fstrong> - the photography backend. WordPress 6.x running as a headless REST API for my documentary photography archive. Custom post types for photos, stories, and projects. Custom taxonomies for series and subjects. A security plugin that blocks XML-RPC, removes user enumeration endpoints, rate-limits the API, and adds security headers. The Nuxt frontend at madsnorgaard.net consumes this API alongside Drupal's JSON:API - that dual-source architecture is one of the things I am most pleased with in this whole setup.\u003C\u002Fp>\u003Cp>\u003Cstrong>fenixnordic.solutions\u003C\u002Fstrong> - the marketing site for Fenix Nordic Solutions, the side agency Phoenix and I run together. 
Nuxt 3 static-generated, bilingual English and Danish, with a dark Nordic editorial design. Custom cursor with ember particles, magnetic button effects, scroll reveal animations. It started as a Drupal site but I rebuilt it as a pure Nuxt frontend. The old Drupal artefacts are still in the repo - a future cleanup task.\u003C\u002Fp>\u003Cp>\u003Cstrong>theazanianprepper.online\u003C\u002Fstrong> - a WordPress site on Docker with Traefik.\u003C\u002Fp>\u003Cp>\u003Cstrong>cosycreator.online\u003C\u002Fstrong> - a Payload CMS v2 project on Docker with Traefik.\u003C\u002Fp>\u003Cp>\u003Cstrong>Rocket.Chat\u003C\u002Fstrong> - self-hosted team communication. Running MongoDB 6.0, upgraded March 2026.\u003C\u002Fp>\u003Cp>\u003Cstrong>Traefik v3\u003C\u002Fstrong> - the reverse proxy that ties it all together. Every service on VPS2 routes through Traefik. It handles SSL termination with automatic Let's Encrypt certificates, forces HTTPS on all traffic, and routes requests to the right container based on domain name. The dashboard is accessible internally via SSH tunnel - no public exposure. The configuration is public on GitHub. Adding a new service is a matter of Docker Compose labels - no nginx configs to edit, no manual certificate management.\u003C\u002Fp>\u003Cp>\u003Cstrong>Monitoring\u003C\u002Fstrong> - Grafana, Prometheus, Loki, AlertManager, cAdvisor, Node Exporter, and Blackbox Exporter. Four provisioned dashboards covering site uptime, infrastructure metrics, logs, and Traefik request analytics. Blackbox Exporter probes eight sites for uptime and SSL certificate expiry. AlertManager routes critical alerts to email via Mailgun. And then there is the webhook bridge - a custom Python service that automatically creates GitHub issues when an alert fires and closes them when the alert resolves. 
That one took some debugging to get the deduplication right, but it means I never miss an infrastructure problem.\u003C\u002Fp>\u003Cp>\u003Cstrong>Mail\u003C\u002Fstrong> - docker-mailserver 14 with Postfix, Dovecot, and Rspamd. Roundcube webmail at webmail.madsnorgaard.net. traefik-certs-dumper handles the TLS certificates. fail2ban protects against brute force attempts. I scored 10\u002F10 on mail-tester.com with this setup - SPF, DKIM, DMARC, rDNS, all of it configured correctly. Self-hosted email that actually works and does not end up in spam folders.\u003C\u002Fp>\u003Cp>\u003Cstrong>Vaultwarden\u003C\u002Fstrong> - a Bitwarden-compatible password vault. This one gets special treatment in the deployment pipeline. No auto-deploy. Manual approval required every time. Contabo snapshot mandatory before any change. When the service that stores every password you own goes down because of a bad deploy, you are locked out of everything. I treat it accordingly.\u003C\u002Fp>\u003Cp>\u003Cstrong>Plausible CE v2\u003C\u002Fstrong> - self-hosted analytics replacing Google Analytics across all my sites. Privacy-respecting, no cookies, no consent banners needed. The Grafana monitoring stack pulls data from Plausible via an Infinity datasource, so I can see analytics alongside infrastructure metrics in the same dashboard.\u003C\u002Fp>\u003Ch2>The deployment architecture\u003C\u002Fh2>\u003Cp>This is where it gets interesting. Every service deploys through the same pipeline, but the mechanism is not what you might expect.\u003C\u002Fp>\u003Cp>Each service has its own GitHub repository. When I push to main, GitHub Actions runs CI checks - linting, CodeSniffer, Composer validation, whatever is relevant for that project. If the checks pass, the workflow fires a \u003Ccode>repository_dispatch\u003C\u002Fcode> event to the contabo-infrastructure repo.\u003C\u002Fp>\u003Cp>The contabo-infrastructure repo has a self-hosted GitHub Actions runner on VPS1. 
When it receives the dispatch event, it checks out the source repo, rsyncs the relevant files to VPS2 via SSH, builds the Docker images on VPS2, and restarts the containers. VPS2 does not need GitHub SSH access - all files are pushed from the runner on VPS1.\u003C\u002Fp>\u003Cp>The result: I push code, and a few minutes later it is live. No manual SSH. No FTP. No cowboy deployments. The contabo-infrastructure repo has seen 315 production deployments at the time of writing.\u003C\u002Fp>\u003Cp>Critical services like Vaultwarden break this pattern deliberately. They require manual workflow triggers with reviewer approval. You do not auto-deploy your password vault.\u003C\u002Fp>\u003Ch2>The monitoring layer\u003C\u002Fh2>\u003Cp>I spent a lot of time on observability because running this many services without it is asking for trouble.\u003C\u002Fp>\u003Cp>Prometheus scrapes metrics from every container via cAdvisor, from the host via Node Exporter, and from external endpoints via Blackbox Exporter. Loki aggregates logs from all containers via Promtail. Grafana displays everything across four provisioned dashboards.\u003C\u002Fp>\u003Cp>The alert rules are straightforward: CPU warning at 85 percent for five minutes, critical at 95 percent for two minutes. Memory warning at 85 percent. Disk warning at 80 percent, critical at 90 percent. Container down if not seen for two minutes.\u003C\u002Fp>\u003Cp>When an alert fires, AlertManager sends an email via Mailgun and the webhook bridge creates a GitHub issue in the contabo-infrastructure repo with all the alert details. When the alert resolves, the bridge comments on the issue and closes it. The entire incident lifecycle is tracked in GitHub Issues without me lifting a finger.\u003C\u002Fp>\u003Cp>There is a caveat with cAdvisor - it is pinned to v0.47.2. The latest version causes roughly 12.8 percent constant CPU overhead, which on a VPS running this many containers is unacceptable. 
I update the version deliberately after testing, not automatically.\u003C\u002Fp>\u003Cp>Loki recently completed the upgrade path from 1.6.1 all the way to 3.0.0 - stepping through 2.0.x and 2.9.x along the way, with verified Contabo snapshots at each stage. It was a careful process but the result is a modern log aggregation stack that I no longer have to worry about being left behind on.\u003C\u002Fp>\u003Ch2>Self-hosted email - the achievement I am most proud of\u003C\u002Fh2>\u003Cp>Getting email right is notoriously difficult. Getting it right on a self-hosted stack with perfect deliverability is the kind of challenge that makes most people give up and use a managed service.\u003C\u002Fp>\u003Cp>The stack is docker-mailserver 14 - Postfix for SMTP, Dovecot for IMAP, Rspamd for spam filtering. Roundcube provides webmail. traefik-certs-dumper pulls TLS certificates from the Traefik ACME store so the mail server uses the same Let's Encrypt certificates as everything else. fail2ban watches for brute force attempts with custom jail configuration for Docker subnet awareness.\u003C\u002Fp>\u003Cp>DNS records include SPF, DKIM (generated by the setup script), DMARC, MX, and a PTR record configured through Contabo's panel. Every piece has to be correct or your mail ends up in spam.\u003C\u002Fp>\u003Cp>I ran the full suite through mail-tester.com and scored 10\u002F10. That is SPF aligned, DKIM signed, DMARC enforced, reverse DNS matching, no blacklists, proper HELO hostname. The works.\u003C\u002Fp>\u003Cp>Using mads@madsnorgaard.net as my primary email address, hosted on my own server, with full control over every aspect of the stack - that aligns with everything I believe about digital sovereignty.\u003C\u002Fp>\u003Ch2>Why self-host everything\u003C\u002Fh2>\u003Cp>I wrote previously about saying goodbye to PayPal and American service providers. 
This infrastructure is the practical expression of that conviction.\u003C\u002Fp>\u003Cp>Every service I self-host is a service where I control the data, the configuration, the uptime, and the terms. Nobody can freeze my account. Nobody can change the pricing. Nobody can decide my data belongs to them for training purposes. Nobody can shut down the product and leave me scrambling for alternatives.\u003C\u002Fp>\u003Cp>It costs time. Significantly more time than paying for managed services. The Ubuntu upgrade across both servers took 12 documented sessions. The mail server setup took a solid week of evenings. The monitoring stack has been an ongoing project for months.\u003C\u002Fp>\u003Cp>But the trade-off is sovereignty. I know exactly what runs on my servers. I know exactly where my data lives. I can audit every line of configuration. And when something breaks - and things do break - I can fix it myself without waiting for a support ticket response from a company that may or may not care about my specific problem.\u003C\u002Fp>\u003Ch2>The numbers\u003C\u002Fh2>\u003Cp>Across VPS2: 13 distinct service stacks. North of 50 Docker containers. Eight sites monitored for uptime. Four Grafana dashboards provisioned from git. 315 production deployments via the contabo-infrastructure pipeline. 65 release tags on the madsnorgaard.net repo alone. 12 release tags on the drupal.madsnorgaard.net repo. 38 release tags on the fenixnordic.solutions repo. 10\u002F10 mail-tester.com score.\u003C\u002Fp>\u003Cp>All of it running on two VPS boxes in a German data centre. No cloud provider lock-in. No vendor dependencies I cannot replace. Open source by default.\u003C\u002Fp>\u003Ch2>What is next\u003C\u002Fh2>\u003Cp>There are things I want to improve. The fenixnordic.solutions repo still has old Drupal artefacts that should be cleaned out. I want to add more Plausible integration into the Grafana dashboards. 
The cAdvisor version pinning needs revisiting - newer releases may have resolved the CPU overhead issue that forced me to stay on v0.47.2.\u003C\u002Fp>\u003Cp>And I keep thinking about documenting this whole setup more publicly. Not just the individual repos - several of those are already public - but the architecture as a whole. How the pieces connect. How the deployment pipeline works end to end. How monitoring feeds back into the development workflow via GitHub Issues.\u003C\u002Fp>\u003Cp>Because the thing I hear most often when I share this work is: \"I did not know you could do all that on two VPS boxes.\" You can. It takes time, patience, and a willingness to read documentation at midnight. But you absolutely can.\u003C\u002Fp>\u003Cp>Self-hosted whenever I can. European where it makes sense. Open source by default.\u003C\u002Fp>\u003Cp>\u003Cem>Infrastructure repos: \u003C\u002Fem>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fmadsnorgaard\u002Ftraefik\">\u003Cem>traefik\u003C\u002Fem>\u003C\u002Fa>\u003Cem> · \u003C\u002Fem>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fmadsnorgaard\u002Fplausible\">\u003Cem>plausible\u003C\u002Fem>\u003C\u002Fa>\u003Cem> · \u003C\u002Fem>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fmadsnorgaard\u002Ffenixnordic.solutions\">\u003Cem>fenixnordic.solutions\u003C\u002Fem>\u003C\u002Fa>\u003Cem> · \u003C\u002Fem>\u003Ca 
href=\"https:\u002F\u002Fgithub.com\u002Fmadsnorgaard\u002Fmadsnorgaard.net\">\u003Cem>madsnorgaard.net\u003C\u002Fem>\u003C\u002Fa>\u003C\u002Fp>","running-12-services-two-vps-boxes-germany-my-self-hosted-infrastructure","2026-04-03T11:01:42+00:00",[13,16,19,22,25,28,31,34,37,40,43,46,49,52,55,58,61,64,67,70,73],{"id":14,"name":15,"slug":15},"d965487f-3728-4202-86de-5b2a421c60a4","Self-hosting",{"id":17,"name":18,"slug":18},"cd4d8de4-8189-439a-92d1-52d8b9178351","Docker",{"id":20,"name":21,"slug":21},"b036641d-390a-413b-b911-5684b9cf2860","Traefik",{"id":23,"name":24,"slug":24},"db8b4a3f-d438-4953-94ef-71a9e8a77cea","VPS",{"id":26,"name":27,"slug":27},"2fc2f328-ec87-4cae-b18d-c0dba26d3938","Contabo",{"id":29,"name":30,"slug":30},"9ce82c64-b167-4baa-870e-b17ca303f357","infrastructure",{"id":32,"name":33,"slug":33},"49f2de4c-0b5c-4736-9470-1ff09e7a57b5","DevOps",{"id":35,"name":36,"slug":36},"be2d7dd2-5cf1-4b35-89d8-a8e456b3ad50","monitoring",{"id":38,"name":39,"slug":39},"9b49265e-11c3-497d-9dc6-fcf72529d6d5","Grafana",{"id":41,"name":42,"slug":42},"bfac572a-a9d6-48a5-9204-ae809f58e232","Prometheus",{"id":44,"name":45,"slug":45},"877c06ef-54af-4b16-83dd-d0dba7ccaad9","digital sovereignty",{"id":47,"name":48,"slug":48},"ec7b1417-6d0a-4518-b726-4bd2a0764356","open 
source",{"id":50,"name":51,"slug":51},"98c95e6b-edb6-4a31-a5a7-83531bb0b35b","Linux",{"id":53,"name":54,"slug":54},"85d7d938-8e2e-4d27-902e-6c08d10928bd","Nuxt",{"id":56,"name":57,"slug":57},"1ab21e16-a352-4411-b33d-b423729af017","Drupal",{"id":59,"name":60,"slug":60},"be16e000-cc9c-4531-8bda-4a2908eae6fd","WordPress",{"id":62,"name":63,"slug":63},"618fcaff-3861-4bdc-9a6e-29fde8d30d95","email",{"id":65,"name":66,"slug":66},"d1d49fd0-0dc5-4d6f-ad78-2fe03b559a2a","Vaultwarden",{"id":68,"name":69,"slug":69},"1e2953cf-bab4-42ea-95e5-11182922a268","Plausible",{"id":71,"name":72,"slug":72},"89071faf-ca5c-42d6-822e-555a2051d86e","CI\u002FCD",{"id":74,"name":75,"slug":75},"4f98c9bf-dee5-404a-8b6f-17c4c1e88da5","GitHub Actions",{"id":77,"title":78,"teaser":79,"body":80,"slug":81,"date":82,"tags":83},"a36d6b69-bcfc-4328-a2b7-4e193fcb0fa1","Magenta audited 20 OS2 products. Here is what they found.","In May 2025, Magenta published an open source analysis of every OS2 product. Not a gentle overview - a proper audit. Commits counted, Docker Hub versions cross-referenced with GitHub, proprietary…","\u003Cp>In May 2025, Magenta published an open source analysis of every OS2 product. Not a gentle overview - a proper audit. Commits counted, Docker Hub versions cross-referenced with GitHub, proprietary dependencies exposed, documentation quality assessed. The report runs to 60 pages and it does not pull punches.\u003C\u002Fp>\u003Cp>I have read the whole thing. And honestly, some of it is hard to take in.\u003C\u002Fp>\u003Ch2>Full disclosure\u003C\u002Fh2>\u003Cp>I need to be upfront about something. I built OS2udoglær while working as Tech Lead at Novicell. That project shows up in the report with mostly favourable marks. So yes - I have skin in this game. I am not going to pretend otherwise.\u003C\u002Fp>\u003Cp>But the data in this report is publicly verifiable. Anyone can go to GitHub and count the commits, check the README files, look at the Docker Hub tags. 
Magenta did the work of compiling it all in one place, and regardless of what you think about their motivations, the numbers are the numbers.\u003C\u002Fp>\u003Ch2>What the report actually says\u003C\u002Fh2>\u003Cp>Magenta evaluated all 20 OS2 products across four categories: structured releases and source code access, open code libraries, documentation and README quality, and changelogs. They used a simple traffic light system - green, yellow, red.\u003C\u002Fp>\u003Cp>The headline finding is damning. Two products - OS2flytjord and OS2nectar - have no public source code at all. OS2flytjord has a GitHub repository that contains nothing but empty template files. OS2nectar does not appear to have a repository anywhere. The link on OS2's own website leads to the wrong project entirely.\u003C\u002Fp>\u003Cp>But it gets worse than missing code. Several products depend on proprietary Bootstrap themes - \"Angle\" and \"Inspinia\" - that require paid licences. OS2sofd, OS2kravmotor, OS2rollekatalog, and OS2korrespondance all use these. That means you cannot freely use, modify, and redistribute these solutions. The fundamental premise of open source is broken.\u003C\u002Fp>\u003Cp>Then there is the development transparency problem. The report documents how most OS2 products are developed privately, with large code dumps pushed to GitHub at irregular intervals. OS2sync has had 18 commits since October 2019 - and those commits are just labelled \"release\" with a version number. No description, no changelog, no way to understand what changed. Meanwhile, Docker Hub shows versions of OS2sync that are five versions ahead of what is on GitHub. You literally cannot inspect the code that is running in production.\u003C\u002Fp>\u003Ch2>The numbers that matter\u003C\u002Fh2>\u003Cp>Magenta tracked commits and releases across all products over six years. The disparity is staggering.\u003C\u002Fp>\u003Cp>OS2mo has 967 releases and nearly 10,000 commits. 
OS2borgerPC has 236 releases and 3,155 commits. Together, these two Magenta products account for 74 percent of all releases across the entire OS2 ecosystem. The remaining 18 products share the other 26 percent.\u003C\u002Fp>\u003Cp>On the commits side, OS2kitos leads with 14,488 commits - transparent, frequent, small changes. Add OS2mo and OS2borgerPC, and three products account for 78 percent of all commits.\u003C\u002Fp>\u003Cp>The rest? OS2kravmotor has had 8 commits total - all in November 2022. OS2skoledata has had 7 commits since February 2023. OS2sofd has had 6 commits since October 2022. These are not active open source projects. They are occasional code drops with an MPL-2.0 licence file attached.\u003C\u002Fp>\u003Ch2>Where OS2udoglær fits\u003C\u002Fh2>\u003Cp>OS2udoglær received green marks for open code libraries (no proprietary dependencies) and documentation quality. Yellow for releases and changelogs - fair criticism, there is room for improvement on both fronts.\u003C\u002Fp>\u003Cp>What the report highlights positively: 1,121 commits since April 2024 with short intervals and small code changes. A structured Git strategy with feature and bugfix branches named after issues. Detailed README files with setup guides for both DDEV and LAMP stack. GitHub Actions for CI\u002FCD. No dependencies on closed software.\u003C\u002Fp>\u003Cp>That is how I was taught - by doing, by the community, by years of contributing to Drupal.org - that open source projects should work. Frequent commits. Clear documentation. No proprietary lock-in. A development process that someone outside the organisation can actually follow and contribute to.\u003C\u002Fp>\u003Cp>The fact that this is considered noteworthy rather than standard practice is the real problem.\u003C\u002Fp>\u003Ch2>The OS2 ecosystem problem\u003C\u002Fh2>\u003Cp>OS2 was founded in 2012 with the right intentions. A collaborative framework for Danish municipalities to develop and share open source solutions. 
The principles are sound - openness, transparency, community-driven development, no vendor lock-in.\u003C\u002Fp>\u003Cp>But the report exposes a fundamental gap between those principles and what actually happens. OS2 lists 64 suppliers on its website. The reality? Eight. Seven companies plus ITK-Development from Aarhus Municipality. The ratio between listed and actual suppliers is 8:1.\u003C\u002Fp>\u003Cp>Digital Identity alone has nine OS2 products and received critical ratings across nearly all of them. Products where the code on GitHub does not match what is in production. Products with proprietary dependencies. Products with no meaningful documentation. Products where the development process is invisible to anyone outside the vendor.\u003C\u002Fp>\u003Cp>Magenta calls this \"open source washing\" - using the open source brand to promote products that do not meet the standards. And they are right. When OS2 puts its stamp on a solution that has no public source code, it undermines every developer and every company that actually invests in doing open source properly.\u003C\u002Fp>\u003Ch2>Why this matters beyond Denmark\u003C\u002Fh2>\u003Cp>This is not just a Danish problem. Governments across Europe are increasingly turning to open source for public sector digitalisation. The EU is pushing for digital sovereignty. Open source is positioned as the answer to vendor lock-in, to security through transparency, to cost-effective innovation.\u003C\u002Fp>\u003Cp>But if the organisations meant to champion open source cannot enforce their own standards, then the label becomes meaningless. And that hurts everyone - the municipalities who think they are getting the benefits of open source, the vendors who actually invest in transparency, and the broader ecosystem that depends on trust.\u003C\u002Fp>\u003Cp>I wrote about this back in 2024 for the OS2 magazine. The core message has not changed: open source is not just a licence. It is a practice. 
It is how you develop, how you document, how you release, how you collaborate. Skip any of those steps and you have code with a licence file - not an open source project.\u003C\u002Fp>\u003Ch2>What I think should happen\u003C\u002Fh2>\u003Cp>OS2 needs to enforce its own standards. If a product does not have publicly accessible source code, it should not carry the OS2 label. If the code on GitHub is five versions behind what is in production, that is not open source. If a product depends on proprietary licences, that needs to be disclosed and addressed.\u003C\u002Fp>\u003Cp>The report provides a clear framework - four categories, measurable criteria, publicly verifiable data. OS2 could adopt something similar as a minimum quality bar.\u003C\u002Fp>\u003Cp>And for those of us building and maintaining OS2 solutions - the bar should be higher than \"not the worst.\" OS2udoglær needs better changelogs and proper release tags. I know that. The report is right to flag it. Open source is a continuous practice, not a checkbox.\u003C\u002Fp>\u003Ch2>The Drupal connection\u003C\u002Fh2>\u003Cp>OS2udoglær is built on Drupal. So is OS2forms. Both show up among the better-performing products in this report. That is not a coincidence.\u003C\u002Fp>\u003Cp>The Drupal community has decades of established practices around open source development - contribution guidelines, coding standards, structured release processes, public issue queues, transparent development. When you build a product on Drupal and follow those community practices, you inherit a culture of openness that translates directly into the kind of metrics this report measures.\u003C\u002Fp>\u003Cp>I have been working with Drupal for over 15 years. 
Every project I touch carries those practices forward - not because someone mandates it, but because the community taught me that this is how you build software that lasts and that others can actually use.\u003C\u002Fp>\u003Ch2>Here is the thing\u003C\u002Fh2>\u003Cp>Magenta wrote this report because they have been trying to raise these issues internally with OS2 for years and were ignored. You can agree or disagree with their approach of going public. But the data is there, the methodology is transparent, and the findings are verifiable.\u003C\u002Fp>\u003Cp>The full report is available at magenta.dk. Read it. Check the numbers against GitHub yourself. Form your own conclusions.\u003C\u002Fp>\u003Cp>Open source in the Danish public sector deserves better than what most of the OS2 ecosystem currently delivers. And the people building the exceptions - at Magenta, at ITK-Development, at Novicell, at Netcompany on OS2kitos - deserve recognition for doing the work properly.\u003C\u002Fp>\u003Cp>The rest need to step up. Or OS2 needs to stop pretending.\u003C\u002Fp>\u003Chr>\u003Cp>\u003Cem>The Magenta report: \u003C\u002Fem>\u003Ca href=\"https:\u002F\u002Fwww.magenta.dk\u002Fwp-content\u002Fuploads\u002F2025\u002F05\u002FOpen-source-produktanalyse-maj-2025.pdf\">\u003Cem>Open source produktanalyse, maj 2025\u003C\u002Fem>\u003C\u002Fa>\u003C\u002Fp>\u003Cp>\u003Cem>Full disclosure: I developed OS2udoglær while employed as Tech Lead - Drupal at Novicell (2022-2025). 
I currently work as Senior Developer and DevOps Engineer at Eksponent.\u003C\u002Fem>\u003C\u002Fp>","magenta-audited-20-os2-products-here-what-they-found","2026-04-03T10:52:42+00:00",[],{"id":85,"title":86,"teaser":87,"body":88,"slug":89,"date":90,"tags":91},"b32c8e51-1137-4bad-9b1a-902838b9c095","When Time Stands Still: From Afrapix Archives to Living Digital History","There is a photograph that has stayed with me for years - a brilliant capture by Cedric Nunn from the Afrapix Cultural Calendar 1989. The image shows performers mid-leap,…","\u003Ch2 class=\"text-xl font-bold text-text-100 mt-1 -mb-0.5\">A Moment Frozen in Film\u003C\u002Fh2>\u003Cimg data-entity-uuid=\"7a01eec2-ee50-458c-a329-8e874dd5ec28\" data-entity-type=\"file\" src=\"\u002Fsites\u002Fdefault\u002Ffiles\u002Finline-images\u002FAfrapix%20Cultural%20Calendar%201989.png\" width=\"512\" height=\"389\" loading=\"lazy\">\u003Cp class=\"whitespace-normal break-words\">There is a photograph that has stayed with me for years - a brilliant capture by Cedric Nunn from the Afrapix Cultural Calendar 1989. The image shows performers mid-leap, suspended in an impossible moment of joy and movement against a painted theatrical backdrop. It is one of those photographs where time does not just pause; it holds its breath.\u003C\u002Fp>\u003Cp class=\"whitespace-normal break-words\">I first encountered this image while working on the Afrapix feature for South African History Online (SAHO), digitising and preserving the extraordinary photographic archive that documented South Africa's struggle years and cultural resistance. 
Cedric Nunn, one of South Africa's most significant documentary photographers, had this remarkable ability to capture not just events, but the spirit of a moment - the energy, the defiance, the celebration that pulsed through communities during apartheid.\u003C\u002Fp>\u003Ch2 class=\"text-xl font-bold text-text-100 mt-1 -mb-0.5\">Cape Town and the Weight of History\u003C\u002Fh2>\u003Cp class=\"whitespace-normal break-words\">Working with these archives in Cape Town was its own temporal experience. There is something about that city - perhaps it is the mountain that seems to watch over everything, unchanging while the city churns below, or maybe it is the layers of history visible in every neighbourhood transition. Time moves differently there. You can stand at the District Six Museum and feel decades collapse into a single point of grief and memory. You can walk through the Company's Garden and feel centuries breathing through the oak trees.\u003C\u002Fp>\u003Cp class=\"whitespace-normal break-words\">Processing thousands of Afrapix images, each one a fragment of frozen time, created this strange temporal vertigo. Here was 1976 bleeding into 1985, flowing into 1990. 
Photographers like Nunn, Omar Badsha, Paul Weinberg, and others had not just documented history - they had caught time in bottles, preserving the exact texture of light on a particular afternoon, the precise angle of a raised fist, the specific joy of a cultural celebration that refused to be diminished by oppression.\u003C\u002Fp>\u003Cimg data-entity-uuid=\"3eb477ba-bc07-450a-b24a-5d3cbcfc4d85\" data-entity-type=\"file\" src=\"\u002Fsites\u002Fdefault\u002Ffiles\u002Finline-images\u002FAfrapix%20Members.jpg\" width=\"480\" height=\"323\" loading=\"lazy\">\u003Ch2 class=\"text-xl font-bold text-text-100 mt-1 -mb-0.5\">From Archive to Active Memory\u003C\u002Fh2>\u003Cp class=\"whitespace-normal break-words\">This relationship with time - how we capture it, organise it, and make it accessible - brings me to SAHO's newest feature. After years of working with temporal data in these historical archives, we have launched a comprehensive Events platform that bridges past and future, archive and action.\u003C\u002Fp>\u003Cp class=\"whitespace-normal break-words\">The new Events section is not just a calendar - it is a living timeline where South African history continues to unfold. Built with Drupal's powerful content management capabilities, it handles the complex temporal states that real cultural events demand. A conference can be \"happening now\" while still having \"upcoming\" sessions. An exhibition exists simultaneously in multiple temporal dimensions - as a current experience for today's visitors and as tomorrow's historical record.\u003C\u002Fp>\u003Ch2 class=\"text-xl font-bold text-text-100 mt-1 -mb-0.5\">The Technical Poetry of Time\u003C\u002Fh2>\u003Cp class=\"whitespace-normal break-words\">For those interested in the technical side: we are implementing what is called bitemporal modelling. This means tracking not just when events actually happen (valid time) but also when that information enters our system (transaction time). 
This matters enormously for historical accuracy. When someone discovers documentation about a 1960s protest that was previously unrecorded, we can add it to the historical record while preserving the metadata about when this information came to light.\u003C\u002Fp>\u003Cp class=\"whitespace-normal break-words\">The platform uses custom Drupal entities with sophisticated temporal attributes, allowing events to exist in multiple states simultaneously. It is not just about \"past,\" \"present,\" or \"future\" - it is about understanding time as a continuous flow where history is constantly being made, documented, revised, and understood anew.\u003C\u002Fp>\u003Cimg src=\"\u002Fsites\u002Fdefault\u002Ffiles\u002Finline-images\u002Fevents%20overview.jpg\" data-entity-uuid=\"b5d81cce-f801-4831-aef1-d23842006faa\" data-entity-type=\"file\" width=\"1628\" height=\"1179\" loading=\"lazy\">\u003Ch2 class=\"text-xl font-bold text-text-100 mt-1 -mb-0.5\">Your History, Our Collective Memory\u003C\u002Fh2>\u003Cp class=\"whitespace-normal break-words\">But here is what matters most: \u003Cstrong>this platform needs you\u003C\u002Fstrong>.\u003C\u002Fp>\u003Cp class=\"whitespace-normal break-words\">Just as the Afrapix photographers understood that history is not made by famous figures alone but by communities, cultural workers, artists, and ordinary people doing extraordinary things, SAHO's Events platform is designed to be fed by collective contribution.\u003C\u002Fp>\u003Cp class=\"whitespace-pre-wrap break-words\">Are you organising a heritage walk? Add it. Hosting a discussion on decolonisation? List it. Planning an exhibition of struggle posters? Document it. Discovered information about a historical event? Share it.\u003C\u002Fp>\u003Cp class=\"whitespace-normal break-words\">Every event you add becomes part of the living archive. Today's community gathering is tomorrow's historical record. 
That workshop on oral history methods, that book launch about forced removals, that performance piece about identity - they all matter. They are all part of the continuous story.\u003C\u002Fp>\u003Ch2 class=\"text-xl font-bold text-text-100 mt-1 -mb-0.5\">Time as a Collaborative Canvas\u003C\u002Fh2>\u003Cp class=\"whitespace-normal break-words\">Cedric Nunn's photograph from 1989 captures dancers mid-leap, but it also captures something else - the understanding that cultural expression itself is an act of resistance and documentation. Every event we create, attend, and document is another frame in the endless film of our collective history.\u003C\u002Fp>\u003Cp class=\"whitespace-normal break-words\">The new SAHO Events feature is not just about scheduling - it is about recognising that we are all archivists of the present moment. We are all photographers freezing time, even if our camera is just a keyboard and a submission form.\u003C\u002Fp>\u003Cp class=\"whitespace-normal break-words\">Visit \u003Ca class=\"underline\" href=\"https:\u002F\u002Fsahistory.org.za\u002Fall-upcoming-events\">sahistory.org.za\u002Fall-upcoming-events\u003C\u002Fa> to explore upcoming events and, more importantly, to add your own. Because in the end, history is not just about what happened - it is about what is happening, what will happen, and our collective responsibility to ensure these moments are not lost to time.\u003C\u002Fp>\u003Cp class=\"whitespace-normal break-words\">As I learned from those long days with the Afrapix archive, surrounded by Cedric Nunn's brilliant captures and the work of so many other documentary photographers: every moment has the potential to be history. 
The question is - will we preserve it?\u003C\u002Fp>\u003Cimg src=\"\u002Fsites\u002Fdefault\u002Ffiles\u002Finline-images\u002Fscreenshot-2025-11-11_11-17-02_0.png\" data-entity-uuid=\"8a036bb9-9f0e-48ab-80e2-7d88273307d5\" data-entity-type=\"file\" width=\"1491\" height=\"982\" loading=\"lazy\">","when-time-stands-still-afrapix-archives-living-digital-history","2025-11-11T10:32:50+00:00",[],
No virtualization overhead.\u003C\u002Fp>\u003Cp>The shortcuts you need to remember RIGHT NOW:\u003C\u002Fp>\u003Cul>\u003Cli>\u003Ccode>Super + Space\u003C\u002Fcode> - Launch anything\u003C\u002Fli>\u003Cli>\u003Ccode>Super + Alt + Space\u003C\u002Fcode> - The Omarchy Menu (your control panel)\u003C\u002Fli>\u003Cli>\u003Ccode>Super + Return\u003C\u002Fcode> - Terminal\u003C\u002Fli>\u003Cli>\u003Ccode>Super + Esc\u003C\u002Fcode> - Reload everything\u003C\u002Fli>\u003C\u002Ful>\u003Cp>Got it? Good. Let's actually install stuff.\u003C\u002Fp>\u003Ch2>Installing Essential Development Tools\u003C\u002Fh2>\u003Ch3>Setting Up Zsh and Oh My Zsh\u003C\u002Fh3>\u003Cp>Omarchy doesn't come with zsh by default, but it's easy to install and configure:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\"># Install zsh\nsudo pacman -S zsh\n\n# Add zsh to valid shells\ncommand -v zsh | sudo tee -a \u002Fetc\u002Fshells\n\n# Install Oh My Zsh\nsh -c \"$(curl -fsSL https:\u002F\u002Fraw.githubusercontent.com\u002Fohmyzsh\u002Fohmyzsh\u002Fmaster\u002Ftools\u002Finstall.sh)\"\n\n# Set zsh as default shell (if chsh doesn't work, use the workaround below)\nsudo usermod -s \u002Fusr\u002Fbin\u002Fzsh $USER\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>\u003Cstrong>If shell changing doesn't work\u003C\u002Fstrong>, add this to your \u003Ccode>~\u002F.bashrc\u003C\u002Fcode> as a workaround:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">echo 'exec zsh' &gt;&gt; ~\u002F.bashrc\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Ch3>Installing Oh My Zsh Plugins\u003C\u002Fh3>\u003Cp>Enhance your terminal experience with these essential plugins:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\"># Autosuggestions (suggests commands as you type)\ngit clone https:\u002F\u002Fgithub.com\u002Fzsh-users\u002Fzsh-autosuggestions ${ZSH_CUSTOM:-~\u002F.oh-my-zsh\u002Fcustom}\u002Fplugins\u002Fzsh-autosuggestions\n\n# Syntax 
highlighting (colors your commands)\ngit clone https:\u002F\u002Fgithub.com\u002Fzsh-users\u002Fzsh-syntax-highlighting.git ${ZSH_CUSTOM:-~\u002F.oh-my-zsh\u002Fcustom}\u002Fplugins\u002Fzsh-syntax-highlighting\n\n# Enable plugins - quick one-liner method\nsed -i 's\u002Fplugins=(git)\u002Fplugins=(git zsh-autosuggestions zsh-syntax-highlighting sudo colored-man-pages command-not-found)\u002F' ~\u002F.zshrc\n\n# Reload configuration\nsource ~\u002F.zshrc\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>\u003Cstrong>Plugin features:\u003C\u002Fstrong>\u003C\u002Fp>\u003Cul>\u003Cli>\u003Cstrong>zsh-autosuggestions\u003C\u002Fstrong> - Press → to accept suggestions\u003C\u002Fli>\u003Cli>\u003Cstrong>zsh-syntax-highlighting\u003C\u002Fstrong> - Red = invalid command, green = valid\u003C\u002Fli>\u003Cli>\u003Cstrong>sudo\u003C\u002Fstrong> - Double-tap ESC to add sudo to previous command\u003C\u002Fli>\u003Cli>\u003Cstrong>colored-man-pages\u003C\u002Fstrong> - Colorful, readable man pages\u003C\u002Fli>\u003C\u002Ful>\u003Ch3>Installing IDEs and Editors\u003C\u002Fh3>\u003Ch4>VSCode\u003C\u002Fh4>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">yay -S visual-studio-code-bin\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>Or use the Omarchy Menu: \u003Ccode>Super + Alt + Space\u003C\u002Fcode> → Install → Editor → VSCode\u003C\u002Fp>\u003Ch4>Setting Default Editor\u003C\u002Fh4>\u003Cp>After installing your preferred editor, set it as default:\u003C\u002Fp>\u003Col>\u003Cli>Press \u003Ccode>Super + Alt + Space\u003C\u002Fcode>\u003C\u002Fli>\u003Cli>Navigate to Setup → Defaults\u003C\u002Fli>\u003Cli>Edit the UWSM defaults file\u003C\u002Fli>\u003Cli>Save and press \u003Ccode>Super + Esc\u003C\u002Fcode> to reload\u003C\u002Fli>\u003C\u002Fol>\u003Ch3>Installing Development Tools\u003C\u002Fh3>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\"># Install ddev (Docker-based development environment)\nyay -S ddev-bin\n\n# Install all at once if you 
prefer\nyay -S visual-studio-code-bin ddev-bin slack-desktop discord\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Ch3>Installing Claude Code\u003C\u002Fh3>\u003Cp>Claude Code is an AI-powered coding assistant that works directly in your terminal:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\"># Recommended: Native installer\ncurl -fsSL https:\u002F\u002Fclaude.ai\u002Finstall.sh | bash\n\n# Alternative: via npm\nnpm install -g @anthropic-ai\u002Fclaude-code\n\n# Verify installation\nclaude doctor\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>\u003Cstrong>First-time setup:\u003C\u002Fstrong>\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">cd ~\u002Fyour-project\nclaude\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>On first run, you'll authenticate through your browser using either Claude Max or Anthropic Console.\u003C\u002Fp>\u003Cp>\u003Cstrong>Note:\u003C\u002Fstrong> Claude Code requires either a Claude Max subscription or API credits. Max is usually more economical for regular use.\u003C\u002Fp>\u003Ch2>Managing Installed Applications\u003C\u002Fh2>\u003Ch3>Finding Installed Packages\u003C\u002Fh3>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\"># Search for specific packages\npacman -Q | grep -i \"search-term\"\n\n# List all installed packages\npacman -Q\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Ch3>Removing Unwanted Applications\u003C\u002Fh3>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\"># Remove package and unused dependencies\nsudo pacman -Rs package-name\n\n# Example: Remove specific apps\nsudo pacman -Rs 1password-beta 1password-cli\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Ch3>Using the Omarchy Menu\u003C\u002Fh3>\u003Cp>The easiest way to manage apps:\u003C\u002Fp>\u003Col>\u003Cli>Press \u003Ccode>Super + Alt + Space\u003C\u002Fcode>\u003C\u002Fli>\u003Cli>Navigate to Remove\u003C\u002Fli>\u003Cli>Browse and select apps to uninstall\u003C\u002Fli>\u003C\u002Fol>\u003Ch2>Configuring 
Multiple Keyboard Layouts\u003C\u002Fh2>\u003Cp>If you work in multiple languages, you'll want to switch between keyboard layouts easily:\u003C\u002Fp>\u003Cp>\u003Cstrong>Edit Hyprland configuration:\u003C\u002Fstrong>\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">vim ~\u002F.config\u002Fhypr\u002Fhyprland.conf\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>\u003Cstrong>Add\u002Fmodify the input section:\u003C\u002Fstrong>\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext\">input {\n    kb_layout = us,dk\n    kb_variant = \n    kb_model =\n    kb_options = grp:alt_shift_toggle\n    # ... other input settings\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>\u003Cstrong>Switch layouts:\u003C\u002Fstrong> Press \u003Ccode>Alt + Shift\u003C\u002Fcode>\u003C\u002Fp>\u003Cp>\u003Cstrong>Alternative switching options:\u003C\u002Fstrong>\u003C\u002Fp>\u003Cul>\u003Cli>\u003Ccode>grp:win_space_toggle\u003C\u002Fcode> - Super + Space (beware: this clashes with Omarchy's app launcher shortcut)\u003C\u002Fli>\u003Cli>\u003Ccode>grp:caps_toggle\u003C\u002Fcode> - CapsLock toggles\u003C\u002Fli>\u003Cli>\u003Ccode>grp:ctrl_shift_toggle\u003C\u002Fcode> - Ctrl + Shift\u003C\u002Fli>\u003C\u002Ful>\u003Cp>\u003Cstrong>Apply changes:\u003C\u002Fstrong> Press \u003Ccode>Super + Esc\u003C\u002Fcode> to reload Hyprland\u003C\u002Fp>\u003Ch2>Package Management Tips\u003C\u002Fh2>\u003Cp>Omarchy uses a combination of package managers:\u003C\u002Fp>\u003Ch3>pacman (Official Arch packages)\u003C\u002Fh3>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\"># Update system\nsudo pacman -Syu\n\n# Install package\nsudo pacman -S package-name\n\n# Remove package\nsudo pacman -R package-name\n\n# Remove with dependencies\nsudo pacman -Rs package-name\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Ch3>yay (AUR helper)\u003C\u002Fh3>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\"># Search AUR\nyay -Ss search-term\n\n# Install from AUR\nyay -S package-name\n\n# Update AUR packages\nyay 
-Syu\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Ch3>Omarchy Menu\u003C\u002Fh3>\u003Cp>The GUI way: \u003Ccode>Super + Alt + Space\u003C\u002Fcode> gives you access to:\u003C\u002Fp>\u003Cul>\u003Cli>\u003Cstrong>Install\u003C\u002Fstrong> → Package (Arch repos)\u003C\u002Fli>\u003Cli>\u003Cstrong>Install\u003C\u002Fstrong> → AUR (AUR packages)\u003C\u002Fli>\u003Cli>\u003Cstrong>Install\u003C\u002Fstrong> → Editor, Style, Font (curated lists)\u003C\u002Fli>\u003Cli>\u003Cstrong>Remove\u003C\u002Fstrong> → Uninstall applications\u003C\u002Fli>\u003Cli>\u003Cstrong>Update\u003C\u002Fstrong> → System updates\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>Troubleshooting Common Issues\u003C\u002Fh2>\u003Ch3>Permission Errors with npm\u003C\u002Fh3>\u003Cp>Never use \u003Ccode>sudo npm install -g\u003C\u002Fcode>. Instead, configure npm properly:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">mkdir ~\u002F.npm-global\nnpm config set prefix '~\u002F.npm-global'\necho 'export PATH=~\u002F.npm-global\u002Fbin:$PATH' &gt;&gt; ~\u002F.zshrc\nsource ~\u002F.zshrc\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Ch3>Shell Won't Change\u003C\u002Fh3>\u003Cp>If \u003Ccode>chsh\u003C\u002Fcode> doesn't work, use the workaround:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">echo 'exec zsh' &gt;&gt; ~\u002F.bashrc\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Ch3>Missing Text Editors\u003C\u002Fh3>\u003Cp>Omarchy ships with Neovim but not nano. Install if needed:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">sudo pacman -S nano\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Ch3>Bluetooth Keyboard at Boot\u003C\u002Fh3>\u003Cp>Full-disk encryption requires a wired keyboard (or 2.4GHz wireless) to enter the password at startup. 
Bluetooth keyboards won't work at this stage.\u003C\u002Fp>\u003Ch2>Essential Omarchy Workflows\u003C\u002Fh2>\u003Ch3>File Management\u003C\u002Fh3>\u003Cp>Omarchy includes a file manager accessible through the app launcher (\u003Ccode>Super + Space\u003C\u002Fcode>).\u003C\u002Fp>\u003Ch3>Screenshots\u003C\u002Fh3>\u003Cp>Use the built-in screenshot tools via hotkeys or the Omarchy Menu.\u003C\u002Fp>\u003Ch3>Installing Fonts\u003C\u002Fh3>\u003Cp>\u003Ccode>Super + Alt + Space\u003C\u002Fcode> → Install → Style → Font\u003C\u002Fp>\u003Ch3>Theming\u003C\u002Fh3>\u003Cp>Omarchy comes beautifully themed, but you can customize: \u003Ccode>Super + Alt + Space\u003C\u002Fcode> → Style\u003C\u002Fp>\u003Ch2>Keeping Your System Updated\u003C\u002Fh2>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\"># Update Omarchy-specific packages\nomarchy-update-git\n\n# Update all system packages\nsudo pacman -Syu\n\n# Update AUR packages\nyay -Syu\n\n# Or use the Omarchy Menu\n# Super + Alt + Space → Update\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Ch2>Useful Resources\u003C\u002Fh2>\u003Cul>\u003Cli>\u003Cstrong>Official Documentation:\u003C\u002Fstrong> https:\u002F\u002Flearn.omacom.io\u002F\u003C\u002Fli>\u003Cli>\u003Cstrong>Omarchy Website:\u003C\u002Fstrong> https:\u002F\u002Fomarchy.org\u002F\u003C\u002Fli>\u003Cli>\u003Cstrong>Community Discord:\u003C\u002Fstrong> Join for installation help and troubleshooting\u003C\u002Fli>\u003Cli>\u003Cstrong>GitHub Repository:\u003C\u002Fstrong> https:\u002F\u002Fgithub.com\u002Fbasecamp\u002Fomarchy\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>Conclusion\u003C\u002Fh2>\u003Cp>Omarchy provides an excellent foundation for a modern development environment. With the tools and configurations covered in this guide, you'll have a fully customized system tailored to your workflow. 
The keyboard-driven approach may take some getting used to, but once mastered, it significantly boosts productivity.\u003C\u002Fp>\u003Cp>Remember: everything in Omarchy is designed to be accessible via keyboard shortcuts. Learn the basics (\u003Ccode>Super + Space\u003C\u002Fcode>, \u003Ccode>Super + Alt + Space\u003C\u002Fcode>, \u003Ccode>Super + Return\u003C\u002Fcode>), and you'll quickly discover why so many developers are making the switch.\u003C\u002Fp>\u003Cp>Happy coding on Omarchy!\u003C\u002Fp>\u003Chr>\u003Cp>\u003Cem>Have questions or suggestions for this guide? Feel free to reach out or leave a comment below.\u003C\u002Fem>\u003C\u002Fp>","getting-omarchy-running-what-i-actually-did","2025-10-07T07:05:45+00:00",[100,103,106,107,108],{"id":101,"name":102,"slug":102},"b9e7e7c2-917e-4693-b3d5-0db82a7d80ca","omarchy",{"id":104,"name":105,"slug":105},"7867e670-e264-4621-b4fc-cc0a8c4b15dd","operating system",{"id":47,"name":48,"slug":48},{"id":50,"name":51,"slug":51},{"id":109,"name":110,"slug":110},"901060fc-ba5a-4f54-9ea5-3a5fbf19e565","the alternative",{"id":112,"title":113,"teaser":114,"body":115,"slug":116,"date":117,"tags":118},"9e45ac11-9347-4cf2-a257-7b8df9d1f344","Setting Up GitLab Subdomain Redirects with SSL","This guide shows how to configure multiple subdomains for your GitLab instance, with automatic redirects and SSL certificates. You will end up with both gitlab.example.com and git.example.com…","\u003Cp>This guide shows how to configure multiple subdomains for your GitLab instance, with automatic redirects and SSL certificates. 
You will end up with both \u003Ccode>gitlab.example.com\u003C\u002Fcode> and \u003Ccode>git.example.com\u003C\u002Fcode> working, with the latter redirecting to the former.\u003C\u002Fp>\u003Ch2>Prerequisites\u003C\u002Fh2>\u003Cul>\u003Cli>GitLab Omnibus installation\u003C\u002Fli>\u003Cli>Two DNS records pointing to your server (gitlab.example.com and git.example.com)\u003C\u002Fli>\u003Cli>Certbot installed for SSL certificates\u003C\u002Fli>\u003Cli>Root access to your server\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>Step 1: Get SSL Certificates\u003C\u002Fh2>\u003Cp>First, obtain certificates for both domains:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext\"># Stop GitLab nginx temporarily\nsudo gitlab-ctl stop nginx\n\n# Get certificates for both domains\nsudo certbot certonly --standalone -d gitlab.example.com -d git.example.com\n\n# Start nginx back up\nsudo gitlab-ctl start nginx\u003C\u002Fcode>\u003C\u002Fpre>\u003Cdiv class=\"note\">\u003Cstrong>Rate Limits:\u003C\u002Fstrong> Let's Encrypt has rate limits. 
If you hit them, wait for the reset time shown in the error message.\u003C\u002Fdiv>\u003Ch2>Step 2: Configure GitLab\u003C\u002Fh2>\u003Cp>Edit your GitLab configuration file:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext\">sudo nano \u002Fetc\u002Fgitlab\u002Fgitlab.rb\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>Add this configuration:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext\">external_url 'https:\u002F\u002Fgitlab.example.com'\nletsencrypt['enable'] = false  # Using manual certs\nnginx['ssl_certificate'] = \"\u002Fetc\u002Fletsencrypt\u002Flive\u002Fgitlab.example.com\u002Ffullchain.pem\"\nnginx['ssl_certificate_key'] = \"\u002Fetc\u002Fletsencrypt\u002Flive\u002Fgitlab.example.com\u002Fprivkey.pem\"\nnginx['redirect_http_to_https'] = true\n\n# HTTPS redirect for git.example.com\nnginx['custom_nginx_config'] = \"\nserver {\n  listen 443 ssl;\n  server_name git.example.com;\n  ssl_certificate \u002Fetc\u002Fletsencrypt\u002Flive\u002Fgitlab.example.com\u002Ffullchain.pem;\n  ssl_certificate_key \u002Fetc\u002Fletsencrypt\u002Flive\u002Fgitlab.example.com\u002Fprivkey.pem;\n  return 301 https:\u002F\u002Fgitlab.example.com\\$request_uri;\n}\n\nserver {\n  listen 80;\n  server_name git.example.com;\n  return 301 https:\u002F\u002Fgitlab.example.com\\$request_uri;\n}\n\"\u003C\u002Fcode>\u003C\u002Fpre>\u003Ch2>Step 3: Apply Configuration\u003C\u002Fh2>\u003Cpre>\u003Ccode class=\"language-plaintext\"># Reconfigure GitLab\nsudo gitlab-ctl reconfigure\n\n# Check status\nsudo gitlab-ctl status\u003C\u002Fcode>\u003C\u002Fpre>\u003Ch2>Step 4: Set Up Auto-Renewal\u003C\u002Fh2>\u003Cp>Configure certificate auto-renewal to work with GitLab:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext\"># Edit renewal configuration\nsudo nano \u002Fetc\u002Fletsencrypt\u002Frenewal\u002Fgitlab.example.com.conf\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>Add these lines at the bottom:\u003C\u002Fp>\u003Cpre>\u003Ccode 
class=\"language-plaintext\">pre_hook = gitlab-ctl stop nginx\npost_hook = gitlab-ctl start nginx\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>Add renewal to crontab:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext\">sudo crontab -e\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>Add this line:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext\">0 12 * * * \u002Fusr\u002Fbin\u002Fcertbot renew --quiet\u003C\u002Fcode>\u003C\u002Fpre>\u003Ch2>Step 5: Test Everything\u003C\u002Fh2>\u003Cp>Verify your setup works:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext\"># Test HTTP redirects to HTTPS\ncurl -I http:\u002F\u002Fgitlab.example.com\ncurl -I http:\u002F\u002Fgit.example.com\n\n# Test HTTPS works\ncurl -I https:\u002F\u002Fgitlab.example.com\ncurl -I https:\u002F\u002Fgit.example.com\n\n# Test certificate renewal\nsudo certbot renew --dry-run\u003C\u002Fcode>\u003C\u002Fpre>\u003Cdiv class=\"warning\">\u003Cp>\u003Cstrong>Expected Results:\u003C\u002Fstrong>\u003C\u002Fp>\u003Cul>\u003Cli>HTTP requests → 301 redirect to HTTPS\u003C\u002Fli>\u003Cli>git.example.com → 301 redirect to gitlab.example.com\u003C\u002Fli>\u003Cli>gitlab.example.com → 302 redirect to login page\u003C\u002Fli>\u003Cli>Renewal test should succeed\u003C\u002Fli>\u003C\u002Ful>\u003C\u002Fdiv>\u003Ch2>What You Get\u003C\u002Fh2>\u003Cul>\u003Cli>\u003Cstrong>Flexibility:\u003C\u002Fstrong> Users can access GitLab using either domain name\u003C\u002Fli>\u003Cli>\u003Cstrong>Security:\u003C\u002Fstrong> All traffic automatically redirected to HTTPS\u003C\u002Fli>\u003Cli>\u003Cstrong>Consistency:\u003C\u002Fstrong> Everything ends up at your primary domain\u003C\u002Fli>\u003Cli>\u003Cstrong>Automation:\u003C\u002Fstrong> Certificates renew automatically\u003C\u002Fli>\u003C\u002Ful>\u003Cp>Both \u003Ccode>gitlab.example.com\u003C\u002Fcode> and \u003Ccode>git.example.com\u003C\u002Fcode> now work seamlessly, with the shorter git subdomain redirecting to your main GitLab 
instance. Perfect for accommodating different user preferences while maintaining a single primary URL.\u003C\u002Fp>\u003Cfooter style=\"border-top:1px solid #eee;color:#666;font-size:0.9em;margin-top:40px;padding-top:20px;\">\u003Cp>Remember to replace \u003Ccode>example.com\u003C\u002Fcode> with your actual domain throughout the configuration.\u003C\u002Fp>\u003C\u002Ffooter>","setting-gitlab-subdomain-redirects-ssl","2025-09-23T22:55:19+00:00",[],{"id":120,"title":121,"teaser":122,"body":123,"slug":124,"date":125,"tags":126},"e48987ee-54e1-4216-a08f-f8e525814e86","Why I spend many evenings on South African History Online (SAHO)","Look, most people finish work and that's it. Me? I close my laptop at Novicell here in Aarhus and open it again for SAHO after spending a good time with my family. Been doing this for a good few…","\u003Cp>Look, most people finish work and that's it. Me? I close my laptop at Novicell here in Aarhus and open it again for SAHO after spending a good time with my family. Been doing this for a good few months in order to catch up on technical debt accrued over the years and on upgrades done while I had disconnected from the project.\u003C\u002Fp>\u003Cdiv>\u003Cdiv class=\"grid-cols-1 grid gap-2.5 [&amp;_&gt;_*]:min-w-0 !gap-3.5\">\u003Ch2 class=\"text-xl font-bold text-text-100 mt-1 -mb-0.5\">The Cape Town connection\u003C\u002Fh2>\u003Cp class=\"whitespace-normal break-words\">I lived in Cape Town for 12 years. Not as a tourist - proper lived there. The parties, the nature, Mitchells Plain, the trains, the taxi ranks, the girls, everything. Shot thousands of documentary photographs, curated exhibitions and met people I will always consider family. Got published in National Geographic and The Economist. But that's not what this is about.\u003C\u002Fp>\u003Cp class=\"whitespace-normal break-words\">What got me was the stories. 
Every corner had a particular South African story, and here's the thing - those stories tie into my own Danish history through slavery. Denmark, for all its social democracy and good causes, has its own slavery past. I don't carry guilt - I carry responsibility. I can actively be anti-racist, tell a people's history, make sure information flows freely from the perspective of those who lived it, not those who wrote the official version.\u003C\u002Fp>\u003Cp class=\"whitespace-normal break-words\">A people's history. That's what our CEO Omar Badsha always said until it became SAHO's official slogan. History from the majority's perspective - the element of socialism that means we are because of others. That's Ubuntu right there. The same core meaning.\u003C\u002Fp>\u003Cp class=\"whitespace-normal break-words\">And SAHO? They've been collecting these stories, archive items and research for 25 years. No ads. Never had them, never will.\u003C\u002Fp>\u003Ch2>What we're actually doing\u003C\u002Fh2>\u003Cp>Phoenix and I took on the Drupal 8 to 11 migration. Sounds boring? It's not when you realize we lost dates on 3500+ This Day In History articles. That's thousands of stories that need fixing. Real South African history that matters to real people.\u003C\u002Fp>\u003Cp>The stats don't lie - 683K clicks, 51.5M impressions, 92% growth. We went from a site that got traffic but no engagement to one where people actually interact. The Group Areas Act page alone gets 29.6K clicks. The Apartheid Legislation pages - 13.5K and 11.7K. 
People want this information.\u003C\u002Fp>\u003Cimg src=\"\u002Fsites\u002Fdefault\u002Ffiles\u002Finline-images\u002FScreenshot%202025-09-05%20174537.png\" data-entity-uuid=\"0fc3697c-4bcf-45ff-a911-835453d5e650\" data-entity-type=\"file\" alt=\"Google traffic and impressions for August 2025\" width=\"871\" height=\"598\" loading=\"lazy\">\u003Ch2>The technical bits (for those who care)\u003C\u002Fh2>\u003Cp class=\"whitespace-normal break-words\">Yeah, it's database optimisation and CMS migration. But here's what we actually inherited - the project wasn't even using Composer. Manual dependency management and FileZilla deployments. I'm not joking.\u003C\u002Fp>\u003Cp class=\"whitespace-normal break-words\">We lifted this whole thing into a proper DevOps setup. GitHub workflows, new infrastructure, Apache Solr for search, Redis for caching, latest PHP. From cowboy coding to actual modern development.\u003C\u002Fp>\u003Cp class=\"whitespace-normal break-words\">But every query I optimize means faster access to history. Every bug I fix means someone finds their grandfather's story, understands their neighborhood's past, connects with their country's journey.\u003C\u002Fp>\u003Cp class=\"whitespace-normal break-words\">The dates thing bugs me. We know there are thousands more articles waiting to be properly dated and catalogued. It's not just technical debt - it's about having proper insight into the data that matters.\u003C\u002Fp>\u003Ch2>Why evenings in Denmark matter to South Africa\u003C\u002Fh2>\u003Cp>I could be watching Netflix. Instead, I'm fixing code for a project 10,000km away. Why? Because I walked those streets. I know what it means when a kid in Khayelitsha can read about their history on their phone. When a researcher in Jo'burg doesn't hit a paywall. When someone googles their township and finds actual history, not just crime stats.\u003C\u002Fp>\u003Cp>Phoenix keeps me honest - I ask her opinion like 5 times minimum on everything. Sorry Phoenix! 
But her feedback helped me transform this Drupal 8 site into something that actually works - and runs on the latest Drupal version, too. The modern style, the user engagement - that's her influence right there.\u003C\u002Fp>\u003Ch2>Here's the thing\u003C\u002Fh2>\u003Cp>SAHO needs help. Not just money (though yes, we're working on funding solutions that don't involve selling out). We need people. The editorial team needs reigniting. The admin team needs support.\u003C\u002Fp>\u003Cp>But also - we need YOU. Got opinions? Found a typo? Want to add something? It's all on GitHub: https:\u002F\u002Fgithub.com\u002FSouth-African-History-Online\u002Fsahistory-web\u002Fissues\u002Fnew\u002Fchoose\u003C\u002Fp>\u003Cp>Open a profile. Tell us what sucks. Fix what's broken. Add what's missing. This isn't some closed academic project - it's open source history.\u003C\u002Fp>\u003Ch2>The South African part\u003C\u002Fh2>\u003Cp>South Africa is unique. I'm not South African, but I felt it every day I was there. The way people navigate their shared past, the complexity, the resilience. That vision of unity despite everything - it's real and it's worth preserving digitally.\u003C\u002Fp>\u003Cp>Every evening I spend debugging is an evening ensuring those stories survive. The June 16 Soweto Youth Uprising page that gets 9.7K clicks? That's kids learning their history. The apartheid legislation pages? That's people understanding how we got here.\u003C\u002Fp>\u003Ch2>So what\u003C\u002Fh2>\u003Cp>I'm a Jutland developer who fell in love with South African stories. Now I make sure those stories stay online, stay free, stay accessible. No corporate BS, no paywalls, just history available to anyone with an internet connection.\u003C\u002Fp>\u003Cp>Want to help? Jump on GitHub. Can't code? Report bugs, suggest edits, tell us what's missing. This is bigger than my evening coding sessions. 
It's about keeping 25 years of collected history alive and growing.\u003C\u002Fp>\u003Cp>South Africa taught me that technology should serve people's need to remember. So that's what I do. Every evening. One bug fix, one optimization, one feature at a time.\u003C\u002Fp>\u003Cp>Still no ads. And it will stay that way.\u003C\u002Fp>\u003Chr>\u003Cp>\u003Cem>Jump in: https:\u002F\u002Fgithub.com\u002FSouth-African-History-Online\u002Fsahistory-web\u002Fissues\u002Fnew\u002Fchoose\u003C\u002Fem>\u003C\u002Fp>\u003Cp>\u003Cem>Visit SAHO: sahistory.org.za\u003C\u002Fem>\u003C\u002Fp>\u003C\u002Fdiv>\u003C\u002Fdiv>","why-i-spend-many-evenings-south-african-history-online-saho","2025-09-05T16:06:55+00:00",[],{"id":128,"title":129,"teaser":130,"body":131,"slug":132,"date":133,"tags":134},"7f2c7c8b-d554-451d-8129-bd7b8bd47118","Why I said goodbye to PayPal - and other American providers","This weekend, I shared a small update about my homelab and gaming setup - a calm note on keeping things running, from Portainer to Prometheus. But I ended that post with a quiet goodbye to PayPal. A…","\u003Cp>This weekend, I shared a small update about my homelab and gaming setup - a calm note on keeping things running, from Portainer to Prometheus. But I ended that post with a quiet goodbye to PayPal. A few people reached out asking why.\u003C\u002Fp>\u003Cp>Here is the longer version.\u003C\u002Fp>\u003Cp>Over the years, I have relied on American tools and platforms for everything from payments to infrastructure. Like many others, I admired the innovation - early PayPal, GitHub, Google, AWS. The U.S. positioned itself as a symbol of freedom, open internet, and privacy rights (at least in theory). It was the kind of digital landscape you wanted to build in. But something has changed.\u003C\u002Fp>\u003Cp>This is not about tech anymore. 
It is about direction.\u003C\u002Fp>\u003Cimg src=\"\u002Fsites\u002Fdefault\u002Ffiles\u002Finline-images\u002F482301478_10160431107276933_2938127187606077293_n.jpg\" data-entity-uuid=\"49d69a57-89ea-4847-9f12-ed2a38a22ca9\" data-entity-type=\"file\" width=\"649\" height=\"800\" loading=\"lazy\">\u003Cp>Elon Musk has turned once-promising platforms into playgrounds for misinformation and ego. Trump, back in the spotlight, is pushing rhetoric that openly undermines democracy, inclusion, and global cooperation. And all the while, companies once associated with innovation are doubling down on policies that are anything but humane. PayPal specifically has been at the centre of several stories involving unjust account freezes, lack of transparency, and heavy-handed control over users' funds. That is not a service I want to support anymore.\u003C\u002Fp>\u003Cp>So yes. It is personal. It is political. And it is practical.\u003C\u002Fp>\u003Cp>I have started cutting back on American service providers - not because I think Europe or anyone else is perfect, but because I want to align my digital life with my values. Less centralisation. More privacy. More open source. More sovereignty over my own data.\u003C\u002Fp>\u003Cp>Goodbye PayPal. Hello alternatives.\u003C\u002Fp>\u003Cp>Self-hosted whenever I can. European where it makes sense. 
Open source by default.\u003C\u002Fp>\u003Cp>And yeah - still gaming on weekends.\u003C\u002Fp>","why-i-said-goodbye-paypal-and-other-american-providers","2025-03-23T14:57:11+00:00",[],{"id":136,"title":137,"teaser":138,"body":139,"slug":140,"date":141,"tags":142},"02aaed5e-9ac2-469a-bb59-131cc2a4872f","The foundation for public digitalisation is open source","The Foundation for Public DigitalisationIt is essential to understand the customer's needs and incorporate continuous maintenance and economic considerations into operations, while reusing solutions…","\u003Cdiv>\u003Ch2>The Foundation for Public Digitalisation\u003C\u002Fh2>\u003Cp>It is essential to understand the customer's needs and incorporate continuous maintenance and economic considerations into operations, while reusing solutions rather than developing new ones each time.\u003C\u002Fp>\u003Cp>This contributes to a trustworthy and productive collaboration where all parties benefit fully.\u003C\u002Fp>\u003Ch2>Open Source\u003C\u002Fh2>\u003Cp>By Mads Nørgaard, Tech Lead – Drupal, Novicell\u003C\u002Fp>\u003Ch3>Experience and Engagement in the Open Source Community\u003C\u002Fh3>\u003Cp>Looking back over the past six years of my involvement in Denmark's open source community, one thing is clear: Open source is not just a technological solution, but an overarching strategy that enables innovation, promotes economic responsibility, and supports the interests of the community at a national level.\u003C\u002Fp>\u003Cp>It is a shared understanding that we can achieve much more together than alone.\u003C\u002Fp>\u003Cp>For us, open source is not just a technical choice, but a strategic decision for each client, offering value through innovative, secure, and scalable solutions that meet both the client's needs and those of the broader digital community. The approach varies from client to client, depending on their existing systems, strategy, and total cost of ownership (TCO) considerations. 
It should be noted that the choice doesn't always fall on an open-source platform.\u003C\u002Fp>\u003Cp>At Novicell, we actively contribute to the open-source community. We fix bugs upstream during development, and we are active contributors to platforms like Drupal.org. This means that when you choose us as a client, you're not just getting a solution tailored to your needs, but one that is continuously improved and secured by a global network of developers. A concrete example is our development of the Transform API, which not only met specific client needs but also helped improve standards within MACH architecture and headless systems. The module was originally developed for a private client, and now it is used free of charge by clients like OS2udoglær. Improvements made during the development of the new OS2udoglær can also benefit the original client. We see this as a win-win, driving progress for everyone while supporting a community-oriented approach where quality and security are always in focus.\u003C\u002Fp>\u003Ch3>The Human Factor in Technology\u003C\u002Fh3>\u003Cp>I have been involved in both the operations and development of OS2udoglær, and over the years, I have seen how dedicated individuals like Mie Bjerrisgaard Frydensbjerg can be the driving force behind a project's success. For example, Jes Strickertson in Skive Municipality has played a central role in ensuring that OS2datascanner delivers real value to the municipality, while Pernille Thorsen and Anders Sølbech have been crucial in pushing OS2forms forward. Similar stories from other municipalities confirm that technology is only as good as the people who implement and further develop it. It requires dedication and collaboration from the right individuals to fully realise the potential of IT solutions.\u003C\u002Fp>\u003Cp>When we base a large part of our business on open source, it’s not just because it’s a smart technological solution, but also because it makes good business sense. 
However, business based on open source comes with some unwritten rules, where reciprocity and responsibility are key. Companies cannot just take what they need because the technology is open; they must commit to operating and further developing solutions together in a sustainable manner. If suppliers simply exploit open source without giving back, it can harm the entire ecosystem and undermine the technologies we rely on to build our open-source solutions.\u003C\u002Fp>\u003Ch3>Collaboration and Innovation as Driving Forces\u003C\u002Fh3>\u003Cp>Collaboration and innovation are also key words in the open-source world. As Troels Dahl Ranum from Aarhus Municipality emphasised in a previous opinion piece, open source is about collaborating across organisations to achieve shared goals. Our experiences with the multi-supplier collaboration on OS2forms have shown how cooperation can increase quality and simultaneously set expectations for ordering new developments. Our ongoing involvement in the OS2 open-source community, particularly through projects like OS2forms, is not just a technical contribution but also involves sparring on processes and a deep understanding of public sector business requirements.\u003C\u002Fp>\u003Cp>Being a good supplier in a multi-supplier collaboration requires open communication, cooperation, and technical expertise, supported by economic responsibility. Transparency and an understanding of shared goals are crucial for creating value, while proactive communication and flexibility ensure the project stays on track. It is essential to understand the customer's needs and incorporate continuous maintenance and economic considerations into operations, while reusing solutions rather than developing new ones each time. 
This contributes to a trustworthy and productive collaboration where all parties benefit fully.\u003C\u002Fp>\u003Cp>Unfortunately, we see that some of the larger players in the industry, who claim to support open source and the OS2 collaboration, often leave a very limited footprint in the community. It doesn't take more than opening the code on GitHub and writing good documentation, yet many still do not contribute sufficiently.\u003C\u002Fp>\u003Cp>At Novicell, we are mindful of this balance and ensure that we actively contribute to the projects we use. This is not just an ethical obligation but also a necessity to maintain the technologies that our business depends on.\u003C\u002Fp>\u003Chr>\u003Cp>The article was originally written in Danish and later translated to English for the purpose of sharing it with my broader English-speaking network.\u003C\u002Fp>\u003C\u002Fdiv>","foundation-public-digitalisation-open-source","2024-10-16T08:53:08+00:00",[],{"id":144,"title":145,"teaser":146,"body":147,"slug":148,"date":149,"tags":150},"ce863b19-d421-4613-b1d4-4a7a81c12e5c","Fixing audio issues after Ubuntu 24.04 upgrade on Lenovo T14s","Upgrading your system can be exciting, but it sometimes brings along unexpected challenges. Recently, after upgrading my Lenovo T14s from Ubuntu 22.04 to 24.04, I faced a frustrating issue—my audio…","\u003Cp>Upgrading your system can be exciting, but it sometimes brings along unexpected challenges. Recently, after upgrading my Lenovo T14s from Ubuntu 22.04 to 24.04, I faced a frustrating issue—my audio completely stopped working. Both the internal speakers and any connected headsets weren’t being detected, leaving me with only a \"Dummy Output\" for audio.\u003C\u002Fp>\u003Ch2>The Problem:\u003C\u002Fh2>\u003Cp>After upgrading, I noticed that no audio devices were showing up in my sound settings. 
The only available option was \"Dummy Output,\" which is the system’s way of telling you that it can’t find any real audio hardware. This was a clear sign that something had gone wrong with the audio drivers.\u003C\u002Fp>\u003Cp>A quick check of the system logs revealed the following error:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext\">sof-audio-pci-intel-tgl 0000:00:1f.3: error: request firmware intel\u002Fsof\u002Fsof-tgl.ri failed err: -2\nsof-audio-pci-intel-tgl 0000:00:1f.3: you may need to download the firmware from https:\u002F\u002Fgithub.com\u002Fthesofproject\u002Fsof-bin\u002F\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>It turns out that my system was missing the required Intel Sound Open Firmware (SOF) files needed to run the audio DSP on my hardware. Without these files, the audio subsystem couldn't load properly.\u003C\u002Fp>\u003Ch2>The Fix:\u003C\u002Fh2>\u003Cp>After some research, I discovered that I needed to manually install the missing firmware and DSP topology files for my system to work correctly. Here's the step-by-step guide I followed to fix the issue:\u003C\u002Fp>\u003Ch3>1. Download the Missing Firmware:\u003C\u002Fh3>\u003Cp>I went to the \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthesofproject\u002Fsof-bin\u002Freleases\" target=\"_blank\">SOF GitHub Repository\u003C\u002Fa> and downloaded the latest firmware package.\u003C\u002Fp>\u003Ch3>2. Copy Firmware and Topology Files:\u003C\u002Fh3>\u003Cp>Once downloaded, I located the necessary firmware and topology files, specifically \u003Ccode>sof-tgl.ri\u003C\u002Fcode> and \u003Ccode>sof-hda-generic-2ch.tplg\u003C\u002Fcode>. 
Using the following commands, I copied them to the correct directories:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext\">sudo cp sof-bin-2024.06\u002Fsof\u002Fsof-tgl.ri \u002Flib\u002Ffirmware\u002Fintel\u002Fsof\u002F\nsudo cp sof-bin-2024.06\u002Fsof-tplg\u002Fsof-hda-generic-2ch.tplg \u002Flib\u002Ffirmware\u002Fintel\u002Fsof-tplg\u002F\n\u003C\u002Fcode>\u003C\u002Fpre>\u003Ch3>3. Update Initramfs:\u003C\u002Fh3>\u003Cp>Next, I updated the initramfs to ensure that the new firmware files were loaded during boot:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext\">sudo update-initramfs -u\u003C\u002Fcode>\u003C\u002Fpre>\u003Ch3>4. Reboot the System:\u003C\u002Fh3>\u003Cp>After a reboot, I checked the system logs again to make sure everything was loading correctly:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext\">dmesg | grep sof\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>I was relieved to see that the firmware had been successfully loaded, and the missing audio devices were finally detected!\u003C\u002Fp>\u003Ch2>Lessons Learned:\u003C\u002Fh2>\u003Cp>Upgrading your system can bring new features and improvements, but it can also break existing setups. In this case, understanding the error logs and knowing where to get the missing firmware made all the difference. It’s a reminder of the complexity of hardware drivers on Linux, especially when it comes to newer hardware like Intel’s Sound Open Firmware.\u003C\u002Fp>\u003Cp>If you ever run into this issue, don’t panic! 
The solution is out there, and with a bit of patience, you can get your system back to working order.\u003C\u002Fp>","fixing-audio-issues-after-ubuntu-2404-upgrade-lenovo-t14s","2024-09-23T06:35:44+00:00",[151,154,155,158,161,164,167,170,173],{"id":152,"name":153,"slug":153},"08351221-3a10-4921-94e9-094d111cde57","Ubuntu",{"id":50,"name":51,"slug":51},{"id":156,"name":157,"slug":157},"46b3ea70-e70d-4729-b7d7-54cab6fb3a5c","Audio",{"id":159,"name":160,"slug":160},"43452d1f-77a7-4251-b4b8-07f5abd3f8ab","SOF",{"id":162,"name":163,"slug":163},"4c1769d7-5764-4b67-a0a0-3bb405ca4db7","Intel",{"id":165,"name":166,"slug":166},"6b8a9317-a7bf-4890-a34b-373a86e71eaf","Lenovo T14s",{"id":168,"name":169,"slug":169},"ce04fadc-1215-44f6-a7ec-3744d20641cb","Sound Drivers",{"id":171,"name":172,"slug":172},"7c7425a6-67ba-41ac-981c-341a1911dc96","Firmware Issues",{"id":174,"name":175,"slug":175},"43b720d5-44a7-41db-b176-4dce6612de5e","Troubleshooting",{"id":177,"title":178,"teaser":179,"body":180,"slug":181,"date":182,"tags":183},"434f6027-3439-4ee6-9060-3de4a0d00ad1","Troubleshooting Port 80 Conflicts with Apache on your newly installed WSL","The problemYou have run into a situation where something is already using port 80 on your system, and it turns out Apache is the culprit. This is one of the most common stumbling blocks for…","\u003Ch2>The problem\u003C\u002Fh2>\u003Cp>You have run into a situation where something is already using port 80 on your system, and it turns out Apache is the culprit. This is one of the most common stumbling blocks for developers setting up local servers - especially if you have just installed WSL and did not expect Apache to be there at all.\u003C\u002Fp>\u003Cp>Port 80 is the default port for HTTP traffic, and most local development tools want it. DDEV, Lando, custom Docker setups - they all reach for port 80. 
When Apache is already sitting on it, nothing else can bind to it, and you get errors that are not always obvious about the root cause.\u003C\u002Fp>\u003Ch2>Why is this happening?\u003C\u002Fh2>\u003Cp>The output you are seeing indicates that the Apache web server is currently listening on port 80. This typically happens when Apache was installed as part of a package bundle or as a dependency of something else, and it started automatically. On Ubuntu and Debian-based systems, Apache binds to port 80 by default the moment it is installed - no configuration needed, no permission asked.\u003C\u002Fp>\u003Cp>You can verify this yourself by running:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">sudo lsof -i :80\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>If Apache is the process holding the port, you will see apache2 in the output. Now you know exactly what you are dealing with.\u003C\u002Fp>\u003Ch2>How to free up port 80\u003C\u002Fh2>\u003Cp>There are several ways to handle this, depending on whether you need Apache at all and how permanent you want the fix to be.\u003C\u002Fp>\u003Ch3>1. Stop the Apache service\u003C\u002Fh3>\u003Cp>If you do not need Apache running right now but might want it later, you can stop it temporarily. This frees up port 80 immediately but only until the next reboot or until something starts Apache again.\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">sudo service apache2 stop\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>Verify the port is free afterwards:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">sudo lsof -i :80\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>If the output is empty, you are good to go.\u003C\u002Fp>\u003Ch3>2. Disable Apache from starting on boot\u003C\u002Fh3>\u003Cp>If you rarely need Apache and prefer to start it manually on the odd occasion you do, disable it from the boot sequence. 
This is the option I recommend for most developers who have moved to container-based workflows.\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">sudo systemctl disable apache2\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>This prevents Apache from launching at startup, meaning port 80 stays free unless you explicitly start Apache yourself. You can always re-enable it later with:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">sudo systemctl enable apache2\u003C\u002Fcode>\u003C\u002Fpre>\u003Ch3>3. Remove Apache entirely\u003C\u002Fh3>\u003Cp>If you have determined that you do not need Apache on your system at all, remove it. This is the clean option - no lingering services, no surprise port conflicts down the line.\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">sudo apt-get remove apache2\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>If you want to remove Apache along with its configuration files, use purge instead:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">sudo apt-get purge apache2\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>Before doing this, make sure no other applications or services on your machine depend on Apache. If you are running WordPress locally through Apache, for instance, you will need an alternative in place first.\u003C\u002Fp>\u003Ch3>4. Change Apache's listening port\u003C\u002Fh3>\u003Cp>There is a middle ground that most guides do not mention. 
If you need Apache for some projects but also need port 80 free for other tools, you can change which port Apache listens on.\u003C\u002Fp>\u003Cp>Open the ports configuration file:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">sudo nano \u002Fetc\u002Fapache2\u002Fports.conf\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>Find the line that reads \u003Ccode>Listen 80\u003C\u002Fcode> and change it to another port - 8080 is the common choice:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-apache\">Listen 8080\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>You will also need to update any virtual host files in \u003Ccode>\u002Fetc\u002Fapache2\u002Fsites-enabled\u002F\u003C\u002Fcode> that reference port 80. Then restart Apache:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">sudo service apache2 restart\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>Now Apache runs on 8080 and port 80 is free for whatever else you need.\u003C\u002Fp>\u003Ch3>5. Restart Apache\u003C\u002Fh3>\u003Cp>Sometimes you do not need to stop or remove anything - you just need to reset Apache after a configuration change or if it is misbehaving.\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">sudo service apache2 restart\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>This does not free up port 80. It restarts Apache on the same port. Use this when you have edited Apache configuration files and need the changes to take effect.\u003C\u002Fp>\u003Ch3>6. Check and update your WSL version\u003C\u002Fh3>\u003Cp>If you are running your development environment on WSL, the port conflict might not be Apache's fault at all. 
Older or pre-release versions of WSL can cause unexpected behaviour with port forwarding between Windows and the Linux subsystem.\u003C\u002Fp>\u003Cp>Check your current WSL version:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">wsl --version\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>If the version is outdated or you are on a pre-release build, update it:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">wsl --update\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>I would recommend staying on the stable release unless you have a specific reason to run pre-release. If you do need the latest fixes that have not hit stable yet:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">wsl --update --pre-release\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>It is also worth checking whether something on the Windows side is holding port 80. Hyper-V, IIS, or even Skype have been known to grab it. From PowerShell on the Windows side:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-powershell\">netstat -ano | findstr :80\u003C\u002Fcode>\u003C\u002Fpre>\u003Ch2>Which option should you choose?\u003C\u002Fh2>\u003Cp>It depends on your setup. If you use DDEV or another container-based tool for all your local development, just disable Apache from starting on boot and forget about it. That is what I do on my own machines. If you still have projects that rely on Apache directly, change its listening port to 8080 so it coexists with your other tools. If you are certain you will never need Apache, remove it entirely and keep things clean.\u003C\u002Fp>\u003Ch2>The bigger picture\u003C\u002Fh2>\u003Cp>This kind of port conflict is exactly why container-based development tools like DDEV exist. Instead of managing Apache, MySQL, PHP versions, and port assignments manually on your host system, you let containers handle all of it in isolation. 
Each project gets its own environment, its own ports, its own configuration - and nothing conflicts with anything else.\u003C\u002Fp>\u003Cp>If you are still running a traditional LAMP stack on your local machine for Drupal or WordPress development, consider making the switch. I wrote about setting up DDEV in another post, and it eliminates this entire category of problems.\u003C\u002Fp>\u003Cp>But for now - stop Apache, free the port, and get back to work.\u003C\u002Fp>","troubleshooting-port-80-conflicts-apache-your-newly-installed-wsl","2024-08-23T17:50:34+00:00",[],{"id":185,"title":186,"teaser":187,"body":188,"slug":189,"date":190,"tags":191},"f8be43d9-bf58-4999-84f0-3af6125f7ec9","Level up your dev game with VSCode on WSL","So, you’ve got your shiny Windows 11 setup with WSL, and you’re loving the Linux vibes on your gaming rig. But now you need a proper editor that’s as powerful as it is flexible. Enter Visual Studio…","\u003Cp>So, you’ve got your shiny Windows 11 setup with WSL, and you’re loving the Linux vibes on your gaming rig. But now you need a proper editor that’s as powerful as it is flexible. Enter Visual Studio Code, the editor that’ll make you forget about those bulky IDEs. Let’s get it installed and ready to rock your WSL environment.\u003C\u002Fp>\u003Ch2>Step 1: Install VSCode on Windows\u003C\u002Fh2>\u003Cp>First things first, get VSCode on your Windows system. Head over to \u003Ca href=\"https:\u002F\u002Fcode.visualstudio.com\" target=\"_blank\">code.visualstudio.com\u003C\u002Fa> and grab the Windows installer. Don’t forget to check the \"Add to PATH\" box during installation - future you will thank you when you’re launching VSCode from the terminal.\u003C\u002Fp>\u003Ch2>Step 2: Marry VSCode with WSL\u003C\u002Fh2>\u003Cp>VSCode has this sweet extension called Remote - WSL that lets you work seamlessly between Windows and Linux. 
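\u003C\u002Fp>\u003Cp>A quick aside, and an assumption on my part that the \u003Ccode>code\u003C\u002Fcode> command is already on your PATH: the extension can also be installed straight from a terminal using its marketplace ID:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">code --install-extension ms-vscode-remote.remote-wsl\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>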
Just pop into the Extensions view (\u003Ckbd>Ctrl\u003C\u002Fkbd>+\u003Ckbd>Shift\u003C\u002Fkbd>+\u003Ckbd>X\u003C\u002Fkbd>), search for it, and hit install. Boom, you’re connected!\u003C\u002Fp>\u003Ch2>Step 3: Installing VSCode Inside WSL (because why not?)\u003C\u002Fh2>\u003Cp>Open up your Ubuntu terminal in WSL and run the following commands:\u003C\u002Fp>\u003Cul>\u003Cli>\u003Cstrong>Update your package list (because that’s just good hygiene):\u003C\u002Fstrong>\u003C\u002Fli>\u003C\u002Ful>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">sudo apt update\u003C\u002Fcode>\u003C\u002Fpre>\u003Cul>\u003Cli>\u003Cstrong>Install dependencies to keep things running smoothly:\u003C\u002Fstrong>\u003C\u002Fli>\u003C\u002Ful>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">sudo apt install -y curl gpg software-properties-common apt-transport-https\u003C\u002Fcode>\u003C\u002Fpre>\u003Cul>\u003Cli>\u003Cstrong>Add the Microsoft repository (yes, we’re living in the future where this is a thing):\u003C\u002Fstrong>\u003C\u002Fli>\u003C\u002Ful>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">curl https:\u002F\u002Fpackages.microsoft.com\u002Fkeys\u002Fmicrosoft.asc | gpg --dearmor &gt; microsoft.gpg\nsudo install -o root -g root -m 644 microsoft.gpg \u002Fusr\u002Fshare\u002Fkeyrings\u002F\nsudo sh -c 'echo \"deb [arch=amd64 signed-by=\u002Fusr\u002Fshare\u002Fkeyrings\u002Fmicrosoft.gpg] https:\u002F\u002Fpackages.microsoft.com\u002Frepos\u002Fvscode stable main\" &gt; \u002Fetc\u002Fapt\u002Fsources.list.d\u002Fvscode.list'\nsudo apt update\u003C\u002Fcode>\u003C\u002Fpre>\u003Cul>\u003Cli>\u003Cstrong>Install VSCode (you’re almost there!):\u003C\u002Fstrong>\u003C\u002Fli>\u003C\u002Ful>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">sudo apt install code\u003C\u002Fcode>\u003C\u002Fpre>\u003Ch2>Step 4: Fire Up VSCode in Your Project\u003C\u002Fh2>\u003Cp>Navigate to your project in WSL and launch VSCode 
directly from the terminal with:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext language-bash\">cd ~\u002Fddev-projects\u002Fyour-drupal-project\ncode .\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>VSCode will pop open, ready to be your coding sidekick. Install any recommended extensions to supercharge your setup.\u003C\u002Fp>\u003Ch2>Step 5: Get Cozy with the Integrated Terminal\u003C\u002Fh2>\u003Cp>Say goodbye to context switching—just open the terminal inside VSCode (\u003Ckbd>Ctrl\u003C\u002Fkbd>+\u003Ckbd>`\u003C\u002Fkbd>) and keep your focus where it belongs: on your code.\u003C\u002Fp>\u003Ch2>Outro:\u003C\u002Fh2>\u003Cp>And there you have it! VSCode on WSL is your new development playground, offering the best of both worlds. Whether you’re tweaking your latest Drupal project or diving into a new stack, you’re now equipped with a powerhouse editor. Happy coding, and may your terminal always be bug-free!\u003C\u002Fp>","level-your-dev-game-vscode-wsl","2024-08-17T16:03:58+00:00",[],{"id":193,"title":194,"teaser":195,"body":196,"slug":197,"date":198,"tags":199},"d3615723-6ca1-4372-888f-ea0727cdf696","Setting up WSL on Windows 11 for Drupal 11 development","Coming from an Ubuntu Linux background, I’m all about that streamlined development environment, but let's be real - sometimes you need to game, listen to music, and hop on a few meetings without…","\u003Cp>Coming from an Ubuntu Linux background, I’m all about that streamlined development environment, but let's be real - sometimes you need to game, listen to music, and hop on a few meetings without juggling multiple devices. Enter the gaming PC: a powerhouse that lets me keep work and play separate. But don’t worry, Linux still has a seat at the table.\u003C\u002Fp>\u003Cp>When I switched to Windows 11, I wanted to keep my Linux efficiency while also diving into the sweet world of gaming. 
WSL (Windows Subsystem for Linux) was the perfect bridge between these two worlds, letting me enjoy my Linux workflows without giving up the joys of PC gaming.\u003C\u002Fp>\u003Cp>WSL allows me to run a full Ubuntu environment right inside Windows. This means I can use all the tools I loved on Linux, minus the mess of WAMP or XAMPP - those dinosaurs should be extinct by now. With WSL, I can run my entire Drupal stack natively, with Linux commands and scripts, just like the good old days on Ubuntu. No need to deal with the ghosts of local development past!\u003C\u002Fp>\u003Cp>To make life even smoother, I set up a shortcut to launch WSL without needing admin rights:\u003C\u002Fp>\u003Col>\u003Cli>\u003Cstrong>Create the shortcut:\u003C\u002Fstrong>\u003Cul>\u003Cli>Right-click on the desktop, select \u003Cstrong>New &gt; Shortcut\u003C\u002Fstrong>.\u003C\u002Fli>\u003Cli>Set the location to \u003Ccode>C:\\Windows\\System32\\wsl.exe\u003C\u002Fcode>.\u003C\u002Fli>\u003Cli>Name the shortcut (something cool like \"WSL Terminal\") and click \u003Cstrong>Finish\u003C\u002Fstrong>.\u003C\u002Fli>\u003C\u002Ful>\u003C\u002Fli>\u003Cli>\u003Cstrong>Assign a keyboard shortcut:\u003C\u002Fstrong>\u003Cul>\u003Cli>Right-click the shortcut, go to \u003Cstrong>Properties\u003C\u002Fstrong>.\u003C\u002Fli>\u003Cli>In the \u003Cstrong>Shortcut\u003C\u002Fstrong> tab, click the \u003Cstrong>Shortcut key\u003C\u002Fstrong> field and press \u003Ccode>Ctrl + Alt + T\u003C\u002Fcode>.\u003C\u002Fli>\u003Cli>Click \u003Cstrong>Apply\u003C\u002Fstrong>, then \u003Cstrong>OK\u003C\u002Fstrong>.\u003C\u002Fli>\u003C\u002Ful>\u003C\u002Fli>\u003C\u002Fol>\u003Cp>Now, with a quick keyboard shortcut, I can dive straight into my WSL environment, keeping development fast and efficient. 
It’s like having the best of both worlds: the power and familiarity of Linux for development and the gaming prowess of Windows 11 for, well, everything else.\u003C\u002Fp>\u003Cp>No more fighting with WAMP or XAMPP—just a clean, effective, and powerful way to develop Drupal locally.\u003C\u002Fp>\u003Cp>&nbsp;\u003C\u002Fp>\u003Cimg src=\"\u002Fsites\u002Fdefault\u002Ffiles\u002Finline-images\u002F55%2BHilarious-developer-memes-that-will-leave-you-in-splits-49.jpg\" data-entity-uuid=\"6752a843-c09d-4fe3-9c42-f8d5ea7efa58\" data-entity-type=\"file\" width=\"504\" height=\"475\" class=\"align-center\" loading=\"lazy\">\u003Cp>\u003Cbr>\u003Cstrong>Quick Start: Setting Up Ubuntu on WSL for Drupal Development\u003C\u002Fstrong>\u003C\u002Fp>\u003Ch2>Quick Start: Setting Up Drupal with DDEV on WSL\u003C\u002Fh2>\u003Cp>Say goodbye to those clunky local setups - here’s how to set up your Drupal development environment using DDEV on Ubuntu within WSL. It’s the modern way to do things, leaving WAMP, XAMPP, and their outdated friends in the dust.\u003C\u002Fp>\u003Col>\u003Cli>\u003Cp>\u003Cstrong>Install WSL and Ubuntu 22.04 LTS:\u003C\u002Fstrong>\u003C\u002Fp>\u003Cul>\u003Cli>Open PowerShell as admin and run:\u003C\u002Fli>\u003C\u002Ful>\u003Cpre>\u003Ccode class=\"language-plaintext\">wsl --install\u003C\u002Fcode>\u003C\u002Fpre>\u003Cul>\u003Cli>This installs WSL along with the default Ubuntu distribution. If you want the latest Ubuntu 22.04 LTS, just ensure it’s updated after installation. Restart your machine if prompted.\u003C\u002Fli>\u003C\u002Ful>\u003C\u002Fli>\u003Cli>\u003Cp>\u003Cstrong>Set up Docker and DDEV:\u003C\u002Fstrong>\u003C\u002Fp>\u003Cul>\u003Cli>Launch Ubuntu from the Start menu or by typing \u003Ccode>wsl\u003C\u002Fcode> in your terminal.\u003C\u002Fli>\u003Cli>Install Docker by following the official \u003Ca href=\"https:\u002F\u002Fdocs.docker.com\u002Fengine\u002Finstall\u002Fubuntu\u002F\">Docker on Ubuntu guide\u003C\u002Fa>. 
Make sure Docker is up and running within WSL.\u003C\u002Fli>\u003Cli>Install DDEV using the official script:\u003C\u002Fli>\u003C\u002Ful>\u003Cpre>\u003Ccode class=\"language-plaintext\">sudo apt-get install -y bash-completion apt-transport-https ca-certificates software-properties-common\ncurl -L https:\u002F\u002Fraw.githubusercontent.com\u002Fdrud\u002Fddev\u002Fmaster\u002Fscripts\u002Finstall_ddev.sh | bash\u003C\u002Fcode>\u003C\u002Fpre>\u003C\u002Fli>\u003Cli>\u003Cp>\u003Cstrong>Create your Drupal project:\u003C\u002Fstrong>\u003C\u002Fp>\u003Cul>\u003Cli>Navigate to your project directory:\u003C\u002Fli>\u003C\u002Ful>\u003Cpre>\u003Ccode class=\"language-plaintext\">cd ~\u002Fprojects\u003C\u002Fcode>\u003C\u002Fpre>\u003Cul>\u003Cli>Set up a new Drupal project using DDEV:\u003C\u002Fli>\u003C\u002Ful>\u003Cpre>\u003Ccode class=\"language-plaintext\">ddev config --project-type=drupal11 --docroot=web --create-docroot\nddev start\u003C\u002Fcode>\u003C\u002Fpre>\u003Cul>\u003Cli>Download Drupal core with Composer, then install the site with Drush:\u003C\u002Fli>\u003C\u002Ful>\u003Cpre>\u003Ccode class=\"language-plaintext\">ddev composer create drupal\u002Frecommended-project\nddev composer require drush\u002Fdrush\nddev drush site:install\u003C\u002Fcode>\u003C\u002Fpre>\u003C\u002Fli>\u003Cli>\u003Cstrong>Access your Drupal site:\u003C\u002Fstrong>\u003Cul>\u003Cli>Once DDEV has started your project, access your Drupal site by navigating to \u003Ccode>https:\u002F\u002F&lt;projectname&gt;.ddev.site\u003C\u002Fcode> in your web browser.\u003C\u002Fli>\u003C\u002Ful>\u003C\u002Fli>\u003Cli>\u003Cstrong>Forget the clunky old stacks:\u003C\u002Fstrong>\u003Cul>\u003Cli>With DDEV on WSL, you’re running a modern, containerized development environment that’s fast, flexible, and easy to manage. No more fighting with outdated local setups!\u003C\u002Fli>\u003C\u002Ful>\u003C\u002Fli>\u003C\u002Fol>\u003Cp>Now you're ready to develop Drupal like a pro, using the best tools for the job. 
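\u003C\u002Fp>\u003Cp>One last tip: once the project is up, \u003Ccode>ddev describe\u003C\u002Fcode> prints an overview of the running project - its URLs, database details, and services - which is a handy sanity check that everything started correctly:\u003C\u002Fp>\u003Cpre>\u003Ccode class=\"language-plaintext\">ddev describe\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>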
DDEV and WSL together offer a powerful, efficient setup that leaves those old local development methods in the dust. Happy gaming and coding!\u003C\u002Fp>","setting-wsl-windows-11-drupal-11-development","2024-08-17T14:58:37+00:00",[],{"id":201,"title":202,"teaser":203,"body":204,"slug":205,"date":206,"tags":207},"15ed1b18-7449-4623-bf6d-48a9e7ce274e"," This is a Drupal 11 development project","This is a Drupal 11 site used for development purposes and keeping up to date with latest trends and a chance to really display to a potential client what we can do with Drupal.I will use this setup…","\u003Cp>This is a Drupal 11 site used for development purposes and keeping up to date with latest trends and a chance to really display to a potential client what we can do with Drupal.\u003C\u002Fp>\u003Cp>I will use this setup to do live coding sessions and other fun things to further get myself and my colleagues active within the Drupal community.\u003C\u002Fp>\u003Cp>This site is running per this Github repo at present: \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fmadsnorgaard\u002Fdrupal11_docker_composer_drush\u002F\">https:\u002F\u002Fgithub.com\u002Fmadsnorgaard\u002Fdrupal11_docker_composer_drush\u002F\u003C\u002Fa>\u003Cbr>&nbsp;\u003C\u002Fp>\u003Cimg src=\"\u002Fsites\u002Fdefault\u002Ffiles\u002Finline-images\u002FIMG-20240306-WA0025.jpg\" data-entity-uuid=\"7702025f-d12d-461e-a25b-1b02a19243ce\" data-entity-type=\"file\" width=\"351\" height=\"347\" loading=\"lazy\">","drupal-11-development-project","2024-08-12T19:11:42+00:00",[],0,1,20]