27 Oct

nfsen + debian + apache = d’oh

I was re-doing one of my lab monitoring tools – a VM that hosted too many sparse and poorly maintained pieces of software. I’m now re-homing each bit onto its own VM (partially for sanity), and as part of that I ended up re-installing the excellent NFSen (a netflow monitoring tool/frontend for nfdump).

The software includes a directory named ‘icons’ in the web root, which doesn’t seem insane to me. What is insane, however, is Apache’s decision (by default!) to include an alias for a folder named ‘icons’ in the root. That meant that, without my knowing it, requests for the NFSen icons folder were being served from /usr/share/apache2/…/ whatever. That caused a headache.

To find this out, I ran:
cd /etc/apache2
grep -iR /usr/share *

This told me about the dang alias file, /etc/apache2/mods-available/alias.conf

I went into that file, commented out the offending default, restarted Apache, and now it’s away laughing.
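
For reference, the default that bit me lives in the alias module config and looks roughly like this (exact contents vary a little between Apache versions):
# In /etc/apache2/mods-available/alias.conf, comment out the offending line:
#   Alias /icons/ "/usr/share/apache2/icons/"
# then restart Apache:
sudo service apache2 restart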

14 Oct

QoS for your Linux Plex box

FireQos in action

When Jim Salter posted about FireQos the other day, it made me take note. FireQos is a unix’y QoS/traffic-shaping tool ‘for humans’, a companion to the FireHOL firewall. In my day job, QoS is a complex and multi-faceted thing, requiring tonnes of design, thought and understanding to implement correctly (not to mention hardware). It has dramatic effects on network traffic when set up correctly, but that usually means end-to-end config across a domain, so marking at one end of the network translates to actions all the way through. That’s a bit much for home.

I was interested, as I had a problem. Behind my TV I have an Intel NUC, with a little i3 processor and 802.11n wifi. I use it to torrent things, run a Plex server and be a multi-purpose Linux machine for my own needs the rest of the time. (OwnCloud is still running on the Raspberry Pi 3, mind you). When I was pulling down a delicious new Debian image at 12MB/s and trying to watch something on Plex (via the PS4), things got a bit choppy. Trying to VNC into the box from my laptop to throttle the torrent was always annoying too – it could take minutes for the screen to refresh if a particularly hearty download was going on. Like most nerds, I found that the slightest delay caused by my own setup was slowly tearing me apart.

This is where FireQos comes in. With a very simple install and a couple of minutes of settings out of the way, the performance improved dramatically. All I did was prioritise the traffic for Plex, SSH, VNC and browsing over torrents/anything else – and like magic, everything works smoothly together, with no throttling on the torrent client.

Remember before where I said QoS really needs to be end-to-end in the network to make a difference? In this case, not true. By simply tweaking how Linux handles packets, things have gotten much better, with the rest of the network unaware anything is happening. Obviously, this would improve if I had a router that was also participating in the fun, but I don’t… yet. At the moment, if another device tries to use the network when a full torrent storm is going on, it’s toast.

Anyhow, check out the FireQos tutorial here, and give it a crack yourself. There’s basically no risk, go nuts.

Here’s my fireqos.conf file, so you can copypasta it if you like.
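
The shape of it is something like this – a minimal sketch rather than my exact config; eth0, the rates and the ports are placeholders to swap for your own:
# /etc/firehol/fireqos.conf – minimal sketch
interface eth0 world-in input rate 20Mbit
  class interactive commit 10%
    match tcp port 22      # SSH
    match tcp port 5900    # VNC
  class plex commit 50%
    match tcp port 32400   # Plex
  class browsing commit 20%
    match tcp port 80,443
  class default
  class bulk max 90%
    match tcp port 51413   # torrent client port (example)

interface eth0 world-out output rate 5Mbit
  class interactive commit 10%
    match tcp port 22
    match tcp port 5900
  class plex commit 50%
    match tcp port 32400
  class browsing commit 20%
    match tcp port 80,443
  class default
  class bulk max 90%
    match tcp port 51413
Apply it with ‘sudo fireqos start’ – classes get priority in the order they’re defined, which is where the magic comes from.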

24 Aug

Best Windows TFTP Client

This is mostly a note to self.

3CDaemon was always my Windows TFTP server of choice, but finding a valid .exe that works on Windows 7 64bit and isn’t riddled with viruses is a problem when you’re in a hurry. Ph. Jounin has written the lovely Tftpd64. You can get it from the main site here (and in 32bit flavours if you like). I’ll upload the zip here too, in case it ever goes offline.

I don’t mind if you never want to trust a download from a random blog, but for my own stress levels, now I have it in the cloud 🙂

tftpd64.452

26 Jul

Upgrading Junos issue – not enough space

Quick note, mostly for my own reference down the track.

I have in the lab an MPC3E-NG FQ and a MIC3-100G-DWDM card. To those of you not Juniper-ensconced, that’s a chassis-wide slot and a 100Gbit/s DWDM (tunable optic) card. Wowee, I hear you say. Anyway, the 100G DWDM card requires a fairly spicy new version of Junos, one with a new underlying FreeBSD core at the heart. My lab was running on Junos 14.1R5.5, an already pretty recent version – but for 100G across my lab I need to use the DWDM card, in conjunction with some Infinera DWDM kit for good measure.

Normally, a Junos upgrade is fairly painless. In this case however, I was getting errors that I couldn’t understand. Here is how I pushed past them.

When going from 14.x to 15.1F6, Juniper recommend not running validation of the software release against your config – in other words, use the no-validate option.
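
Something along these lines (give or take the exact flags):
request system software add /var/tmp/junos-install-mx-x86-64-15.1F6.9.tgz no-validate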

When I ran this on one RE (RE0) it went through just fine. On RE1 however, I got this:

Installing package ‘/var/tmp/junos-install-mx-x86-64-15.1F6.9.tgz’ …
Verified manifest signed by PackageProductionEc_2016
Verified manifest signed by PackageProductionRSA_2016
Verified manifest signed by PackageProductionRSA_2016
Verified contents.iso
Verified issu-indb.tgz
Verified junos-x86-64.tgz
Verified kernel
Verified metatags
Verified package.xml
Verified pkgtools.tgz
camcontrol: not found
camcontrol: not found
camcontrol: not found
Verified manifest signed by PackageProductionEc_2016
Saving the config files …
NOTICE: uncommitted changes have been saved in /var/db/config/juniper.conf.pre-install
tar: contents/jpfe-wrlinux.tgz: Wrote only 0 of 10240 bytes
tar: Skipping to next header
tar: Archive contains obsolescent base-64 headers
tar: contents/jplatform-ex92xx.tgz: Wrote only 0 of 10240 bytes

(truncated)

tar: Skipping to next header
tar: Error exit delayed from previous errors
ERROR: junos-install fails post-install

This looks a bit like something has gone wrong, but it’s not immediately obvious what it is. I am using a .tgz of Junos 15 that is sitting on my RE1’s /var partition, and it has 2.1GB of hard-drive space free (show system storage)… Cue a few hours of head-scratching.

Turns out, my assumption about how much space is actually required when upgrading from Junos 14 to 15 was a serious underestimate. I started with about 2GB free, but by the time I was finally successful I had 11GB free. Once the install was complete, I was down to 7.7GB, which means the installation process uses up 3.3GB all by itself. I guess that’s not crazy, as the tarball is 1.9GB to begin with, but the error output didn’t make it clear enough for me, a stupid man.

Here is how I overcame the obvious and got Junos 15.1F6 installed 🙂 (the commands are sketched out after the list)

1: Jump into shell

2: Become root

3: Find all the files that are > 500MB. We’re looking for things in /var

4: Delete any of the old .tgz files from previous releases. If you want to keep them, SCP them off first.

5: Check you now have > 3.5 GB free. Or go all out like me and have 11GB free, whatever.

6: Upgrade Junos and get back to work!
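
In command form, that cleanup looks something like this (a sketch – older FreeBSD-based Junos may want the find size given in 512-byte blocks, e.g. +1048576, rather than the M suffix):
start shell
su
df -h /var
find /var -type f -size +500M -exec ls -lh {} \;
# delete (or SCP off first, then delete) old install bundles under /var/tmp
exit
exit
# then, back in the CLI, re-run the 'request system software add ... no-validate' from above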

19 Jul

WordPress on a VPS.. Ugh.

As part of my dumb journey to self-host things (well, on a VPS I pay for), I fired up Apache and chucked a virtual host on it. The plan will be to move all my hosted sites to this server, but for now I’m starting with a fresh WordPress install for a family member’s B&B website.

In the mix we have:

  • Apache2
  • Ubuntu 14.04 LTS 64bit
  • php5
  • MySQL
  • phpmyadmin
  • WordPress latest
  • Sweat coming out of my face

So the basic LAMP install was done without a hitch. Where WordPress started to suck was permissions. By default, somehow, I had installed WordPress with the permissions set way too wide open (777 on directories). I used some commands from the WordPress Codex (run inside my /var/www/sitename/public_html folder):

james@chappie:~$ sudo find . -type f -exec chmod 664 {} +
james@chappie:~$ sudo find . -type d -exec chmod 775 {} +
james@chappie:~$ chmod 400 wp-config.php

So, there’s that. Then I found a user couldn’t upload a theme or plugin, because for some reason (despite www-data, the Ubuntu Apache user, having group rights to the files..) WordPress couldn’t write to wp-content/, the folder these things go in by default.

If I ran ‘sudo -u www-data touch check’ inside wp-content, the file “check” was created, so it wasn’t a permissions issue as far as I could tell. Weird. I ended up fixing it by explicitly telling WordPress to allow ‘direct’ uploading by adding:

define('FS_METHOD', 'direct');

to the wp-config.php file. All of a sudden… it works.

I had read elsewhere (a three-year-old, mysteriously worded comment on WordPress’ own community site) that changing PHP’s config to use FastCGI/php-fpm was the solution. I am still (and probably always will be) a noob with these web technologies, so I wasn’t really 100% sure what I was doing, and I failed at that – luckily, setting ‘FS_METHOD’ to direct was what I wanted.

Update:

I found that changing the ‘server api’ of PHP5 to FPM/FastCGI (and removing the aforementioned FS_METHOD) also works. I followed the steps listed here (JDawg’s answer): http://askubuntu.com/a/527227/521633

19 Jul

Running your own VPS kind of sucks

I pay a good chunk of change to Dreamhost every year, and have done since I was old enough to have a credit card. It’s handy. They are a pretty chill hosting provider, by and large. I host about 8 or 9 WordPress sites. Some are mine, some for friends. As I get better at unix’y stuff, I’m becoming more of a cheapskate and am thinking that $140NZ or whatever Dreamhost costs per year is a bit steep for a shared hosting platform. I have trialled their more dedicated VPS offering and while it kills performance-wise, it’s too expensive for someone hosting sites on behalf of others for free…

A while back I got a good deal on Zappie Host, based in New Zealand, where most of my ‘clients’ are also based. It’s a dedicated VPS: 2 CPU cores and a gig of RAM for about $7NZ a month, and I’m on a 50%-off deal for the first year. I’m paying for it in parallel with DH so I can get my shit together and migrate. Part of the motivation for sorting it all out before my next DH payment is due is financial, as I’m paying for 2 hosting solutions and not getting much benefit from the combination.

Some challenges:

  • Picking a web server (Apache or NGINX?!)
  • Picking a database (MariaDB or MySQL!?)
  • To use a cPanel or not?
  • Security
  • Mail (jeesus christ, email is complex!)
  • Virtualisation/separation of different client sites
  • Backups!?
  • Remote access for users
  • Unknown unknowns
  • DNS
  • Goddamn email
  • Resource monitoring

I think I need a coffee. What I want to do is use this list and knock out some posts detailing the crap you need to put up with to save a few bucks and call yourself a server administrator 🙂

Time for you and time for me,
And time yet for a hundred indecisions,
And for a hundred visions and revisions,
Before the taking of a toast and tea.

– T.S. Eliot (lol)

01 Jul

Firing up NetBox

Over at Packet Life, stretch has been talking about his/Digital Ocean’s cool new IPAM/DCIM tool. It’s open source, being developed like crazy, and has some interesting features (for me, the ability to document my lab setup, which is getting out of hand, seems like a good place to start).

I ran through the installer on Github and squirted the install onto a fresh Ubuntu 14.04 LTS server on AWS. Here are my thoughts…

  • The install instructions were very well written for a fresh OSS product; pretty much nothing failed the first time through. This is reminiscent of other Digital Ocean documentation I have read. I did need to jump out of editing a config file to generate a key, but that’s minor.
  • From an absolute-beginner-with-Linux point of view (not quite the boat I’m in, but a boat I was in fairly recently) – there are a few things missing (like when to sudo/be root or not, per command). It’s not a noob’s setup guide, but it is pretty easy to follow otherwise.
  • The installation/setup takes around 10-15 mins including generating the AWS EC2 host
  • The interface of NetBox looks lovely and clean

That was pretty easy and quite smooth to install. Now that it’s up and running, I can’t wait to document my lab and post the results up here.

 

29 Apr

Owncloud 9.0.1 on Raspberry Pi 3 – Step by Step

Why?!

I bought a Raspberry Pi 3 on the day it was announced, because I am easily excitable. When it arrived, I tried out a few things like compiling Synergy (much faster than a RPI2!) and the oblig. Quake 3. Once the fun wore off, I thought this might be a good time to finally sort out my cloud storage issues. The issues are as follows:

1) I am mildly concerned about having my data live on someone else’s computer
2) I really like and rely on Dropbox, but my 8GB isn’t enough anymore
3) I am a cheapskate

The solution for this is to self-host a ‘cloud’ storage system. While that’s a bit of a paradox, having a system (with apps!) that keeps my files with me wherever I go and automatically uploads the pictures I take on my phone is too handy to give up – and too risky to have no real control over. The best open-source (free, see point 3 above) solution I’ve found so far is OwnCloud.

Note: If you want to do an OwnCloud install following this post – it doesn’t need to be on a Raspberry Pi 3 – you can do it on pretty much any Debian/Ubuntu server. One day I will move this whole thing to a proper server, but again, see point 3.

Note 2: There are hundreds/thousands/millions of ways to do this task. I am basing this whole thing on Dave Young’s very well-written howto on Element14. In fact, you can probably follow that right now and skip my post – I am writing this down for my own benefit and there are a *few* changes in OwnCloud 9.0.1

What?
OK, so what do you need to get this up and running?

  1. A Raspberry Pi 3 Model B – buy one here
  2. A MicroSD card (Class 10 is speedy, you can get a 200GB one for $80USD at the moment, which is NUTS).
  3. The .img file for Raspbian OS. I suggest using Raspbian Jessie Lite
  4. An internet connection. Any will do, but a decent 30/10Mbit/s connection is recommended.
  5. A static IP and a domain name (or a dynamic DNS service such as the one offered at system-ns.net)
  6. A keyboard and an HDMI capable monitor or screen. This is just for setting up the Pi
  7. Some basic Unix shell skills.. Although you can just follow along and hopefully I’ll spell everything out
  8. A router/firewall in your house that you can forward ports on

Optional extras:

  1. An external USB HDD (for more space)
  2. A smartphone, for the OwnCloud App

If you have the 8 things above (we’ll cover step 4 in some detail later) – then we’re good to start.

Steps to success

Section 1 – Setting up the Raspberry Pi 3

This section you can skip if you already have a freshly installed Raspberry Pi 3 running Raspbian.

  1. First up, install Raspbian OS on the SD card. Steps to do this for your main PC OS are here.
  2. When you have booted into your newly installed Pi, log in with username pi and password raspberry. You’ll change this soon. I suggest you do this bit with the Pi plugged into a TV or screen, with a keyboard. We’ll SSH in and control the Pi from another machine later on.
  3. Run the raspi-config tool – you can set your hostname, expand the file system (use all of your SD card, not just a couple of GB)


    In there, I suggest you run Options 1 and 2. Inside Option 9, I set the Memory Split to 4MB (this is mostly a headless install – why waste RAM on a GPU that won’t get much use?) and enabled SSH. I changed the hostname to ‘cloud’; pick a name you like. Finish up, then reboot the Pi.

  4. Find your IP address. I am using the Ethernet interface (physically plugged into a switch), but you could use WiFi if you wanted (Raspberry Pi 3 has in-built WiFi, google will show you how to set it up!)


    Here is a command to show the IP DHCP has given you (Raspbian uses DHCP by default) – see the sketch after this list. I could set this manually by editing /etc/network/interfaces and changing eth0 inet dhcp to inet static, but I won’t be doing that. I’ll be assigning a static DHCP lease in my router config to keep my Pi on 192.168.0.13 for good. ANYWAY – my IP is found, so I can SSH in from my main computer and live an easier life.
  5. Log in to your Pi from a terminal. iTerm2 for OSX, Putty for Windows, uhh, Terminal from *nix. I use password-less entry as a security feature and because I’m lazy – if you want a hand setting that up let me know – otherwise, you’re in and can run the following commands (sketched just after this list):

  6. That chunky block will install some key pieces of software for us
    1. NGINX – the web server we’ll be using
    2. PHP5 – PHP, bane of all existence
    3. Lots of PHP bits and pieces – php5-apcu (APCu) is the memory cache we’ll use, for example
    4. git – to allow us to grab a nice SSL certificate from Let’s Encrypt
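
The chunky block in question went roughly like this (package names from memory – check them against the Jessie repos before pasting, and pick php5-sqlite or php5-mysql depending on which database you want OwnCloud to use):
# step 4 – see what IP DHCP handed out
hostname -I
# steps 5 & 6 – web server, PHP bits and git
sudo apt-get update
sudo apt-get install -y nginx php5-fpm php5-cli php5-gd php5-curl php5-json php5-apcu php5-sqlite git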

Section 2 – Configuring and installing other bits

  1. Let’s get an SSL cert, so browsing to our cloud will show a nice green lock in the browser, and the OwnCloud app won’t complain *too much*. Of course, this also helps to keep the contents of the cloud machine nice and secure. I use Let’s Encrypt for free signed certificates – something you couldn’t even dream of 5 years ago. As there isn’t a packaged Let’s Encrypt installer for ARM7 Debian at the moment, we’ll use git to grab one (the commands for this section are sketched after the list):


    This will grab Let’s Encrypt (the software) and chuck it in a folder in the pi user’s home directory called letsencrypt, then move us in there. It took about 10 minutes on my Pi 3 and reasonable internet connection, your results may vary.

    When that’s installed, we need to break out and get a DNS name (and a dynamic one, at that, as my ISP doesn’t offer static IP addressing)…

  2. Head to System NS, sign up and create a Dynamic DNS name. For the rest of this bit, I’ll refer to my domain as cloudy.system-ns.net. Next thing we need to do is automate the IP<>DNS mapping, as my ISP might pull the rug out and change my allocated IPv4 address at any time. System NS has a guide to do this for your OS (makes sense to do it on the Raspberry Pi, though!) – which can be found here. Basically, you create a file on the Raspberry Pi which tells the Pi to download the System NS page responsible for tying your current public IP to the chosen DNS name. Then you schedule it with crontab to run every 5 minutes. This means in the worst case, you will be a DNS orphan for around 5 minutes (System NS TTL for A records seems to be very short, which helps).
  3. OK – so now we have a domain, let’s get back to sorting out a SSL cert for it. As this Raspberry Pi will be used solely for OwnCloud (as far as the webserver side of things goes) I will generate the certificate for the /var/www/owncloud directory:

    This will go away and pull a cert down from Let’s Encrypt. In fact, it will pull down a public certificate and a private key. They will be (by default) lurking in /etc/letsencrypt/live/cloudy.system-ns.net (you need to be root to look in there, which you can do with ‘sudo su’). Type ‘exit’ to get back to the pi user when you’re done gawking at the nice new certs.

  4. So, that’s some housekeeping done. Next, I’ll steal wholesale from Dave, and give a modified version of his NGINX config (remember, that’s the web server we installed back in section 1). Let’s edit the nginx config file! (Actually, let’s delete everything in it and start again!)

    This will open nano, the wuss’s text editor. I love it, because I can remember how to use it in a hurry. Anyway, delete everything in there (ctrl-k deletes a whole line at a time in nano). Then edit this, and throw it in:

    You can copy mine and replace all the bits with cloudy, 192.168.0.13 etc..

  5. Now we can change the PHP settings to allow for larger uploads of files. Dave suggests 2GB, so we’ll go with that. In practice you can set it to whatever your filesystem allows. I’ll combo in a couple of steps here (changing where php-fpm listens as well)

    I didn’t follow all of these exactly, so I’ll copy-pasta Dave’s advice and leave you to try it out.

  6. Reboot the Pi!

  7. Now you need to install OwnCloud (hey, finally!). The way to do it on a Raspberry Pi 3 running Raspbian is simple.
    1. Grab the tarball from here

    2. Extract the files

    3. Now you end up with a nice folder called ‘owncloud’. We want to stick that where NGINX is looking for websites, so we’ll move it to /var/www, change the ownership of the folder to belong to the www-data user and delete all the evidence.

    4. Dave now recommends you edit a couple of files in the /var/www/owncloud folder to tweak the filesizes:

    5. Do a final ‘sudo reboot’
    6. Browse to 192.168.0.13 (or whatever your Pi’s IP is) and set up OwnCloud. I have mine set to use the SD card for storage; if you wish to use an external HDD, consult Dave’s excellent post.
  8. When you first load OwnCloud it might complain about a few things, they can usually be solved by installing a memcache, tightening up file/folder permissions or tweaking other security features. As you have a valid SSL cert, you can probably get going pretty well straight away. I’ve added the necessary tweaks to the NGINX config that should get you past most of the pains, at least under Owncloud 9.0.1.
  9. That should be it! I lied, above, though. I use both the SD card and a backup onto an external HDD. This gives me the ability to live without my external HDD if I need it for anything temporarily, and keep 2 copies of all my stuff locally. To do it, I use a nice program called rsync and a little script + crontab to schedule it to run. rsync has a million uses, but one is copying files from one place to another in a really smart way, using deltas (i.e. it only copies the files that have changed). This way, I can schedule it to run every 6 hours, and it only takes as long as transferring the new or changed files to run.
    1. Install rsync!

    2. Create a script to run the sync (see the sketch after this list)..

    3. Now every 6 hours from midnight that will run, doing an rsync and adding a date-stamped line to the file oc-backup.log. My 1TB external drive is permanently mounted to /media/shamu, by the way.
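
For reference, here’s a consolidated sketch of the commands behind steps 1, 3, 7 and 9 – the domain, paths, script name and version numbers are examples, so substitute your own:
# step 1 – grab the Let's Encrypt client with git
cd ~
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt

# step 3 – request a certificate against the OwnCloud webroot
# (nginx needs to be serving that directory for the validation to pass)
sudo ./letsencrypt-auto certonly --webroot -w /var/www/owncloud -d cloudy.system-ns.net

# step 7 – fetch OwnCloud 9.0.1, move it into place and hand it to www-data
cd ~
wget https://download.owncloud.org/community/owncloud-9.0.1.tar.bz2
tar -xjf owncloud-9.0.1.tar.bz2
sudo mv owncloud /var/www/
sudo chown -R www-data:www-data /var/www/owncloud
rm owncloud-9.0.1.tar.bz2

# step 9 – rsync backup of the data directory to the external drive
sudo apt-get install -y rsync
cat << 'EOF' > /home/pi/oc-backup.sh
#!/bin/bash
rsync -a --delete /var/www/owncloud/data/ /media/shamu/owncloud-backup/
echo "owncloud backup finished $(date)" >> /home/pi/oc-backup.log
EOF
chmod +x /home/pi/oc-backup.sh
# crontab -e entry to run it every 6 hours from midnight:
# 0 */6 * * * /home/pi/oc-backup.sh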

So, that’s it. Up and running. Mine has been working for 2 weeks now, with approx 128MB of RAM free during normal operation (I check a little too often). php-fpm eats the CPU for breakfast when doing things like generating image previews in the app (I use the Android one and it’s great).. But mostly it all works. In fact, today I deleted my Dropbox app and all the files on it – I’m eating my own dogfood here, trusting this setup for better or worse. If you use it, let me know!

27 Apr

How to migrate a Raspberry Pi SD card to a bigger SD Card

Snazzy title, James.

I recently bought a 200GB SD card from Amazon for what I consider to be a completely crazy price (around $80USD). My Raspberry Pi 3 is running OwnCloud (howto post coming soon) and the 32GB card currently serving as the OS’ home is a little on the small side. It’s also quite an old card, so I’m worried it will one day up and die.

Here’s a quick step-by-step on how to move your SD card to a bigger one, using a Mac with OSX El Capitan (or basically any Unix based computer with an SD card reader).

Step 0: Backup anything important. Things can go wrong any time you mess with this stuff, so caution.

Step 1: Shut down your Pi (then pull the plug out)

Step 2: Take the SD card you want to copy out of the pi, and stick it in your Mac (with an adaptor). It will mount automatically, so run Disk Utility and unmount the ‘boot’ partition, which is what Raspbian OS calls it by default. Once the partitions are greyed out, they’re not mounted anymore.


Step 3: Check you have space. You will need as much free hard-drive space on your Mac as equals the size of the SD card, even if it’s not full. Mine is 32GB, so I need 32GB hard drive space free. Sadly, I don’t have that much free on my tiny SSD, so I will be using an external USB 3 drive with ample space.

Step 4: Find the BSD name of your SD card reader/card. You do this by opening a terminal and entering:
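
The command for that is diskutil list:
diskutil list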

This will spit out a list of your connected media. We’re looking for something that matches the 32GB size of the SD card (yours might be 8GB, 16GB… whatever)

Here we see ‘disk2’ under Identifier, which is what we want.

Step 5: Find out where the file copy of the SD card should go. I want to stick it on my external HDD called SUGAR ROSS (for some reason), so I need to find where it’s mounted.
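
Either of these will show you what’s mounted where:
df -h
ls /Volumes/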

That shows the 1TB external drive I have connected is located at /Volumes/Sugar\ Ross/ . The extra \ I have in there is an escape character, which helps Unix-based operating systems handle things like spaces in folder names, which they historically didn’t like.

Step 6: Copy that floppy. We’re now going to use one of the oldest Unix commands there is – dd. Once again, in the terminal, tell the computer to make a direct byte-for-byte copy of the SD card and stick it in a file called sdcard.img (or whatever you fancy).
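
With my identifiers that works out to something like this (disk2 and the destination path are mine – triple-check yours, dd does exactly what it’s told):
sudo dd if=/dev/disk2 of=/Volumes/Sugar\ Ross/sdcard.img bs=1m
# (using the raw device /dev/rdisk2 and a bigger block size speeds this up considerably)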

This will take a long while. It’s pretty oldschool and takes its job very seriously. You can tell what’s going on behind the scenes by pressing control-t every now and then. So far, mine has written 23GB in 1226 seconds, and it’s still running. Won’t be much longer now, have a cup of tea. (I will not be doing this for my 200GB card.. Hopefully).

Step 7: Check your image. Ok, 27 minutes elapsed time later, and dd tells me it has finished. As I never believe anything my computer tells me, I want to check the output file.

Using ‘ls -la’ shows me all the files on SUGAR ROSS, and I can see my nice .img file there at just shy of 32GB.

Step 8: Format wars. The SD card I’m replacing my Pi’s card with is 200GB, and I’m told that gargantuan SD cards come with a funny filesystem (exFAT) that’s not compatible with the Pi. As I am writing a physical copy of the existing card onto the new one, I don’t think I’ll have a problem. If you do, I suggest formatting it to FAT32 and going from there. The dd process (this time in reverse) we just ran will take care of everything, as we didn’t copy a partition of the SD card, we copied the whole thing.

Step 9: Prepare the new card. Stick the new (bigger) SD Card into the Mac’s card reader. Use diskutil list again to find the BSD name of the disk, and then unmount any partitions that automatically mounted.

Step 10: Write to disk. This time, we use dd in reverse (and wait a lot longer).
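
Assuming the new card also shows up as disk2, that’s:
sudo dd if=/Volumes/Sugar\ Ross/sdcard.img of=/dev/disk2 bs=1m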

Hint: make sure you unmount, not eject the SD card partition on the card you want to write out to. Disk Utility is good for this.

Step 11: Fire it up. Now that we’ve spent 11,000 seconds writing the 32GB image back to the new card, unmount it (if you succeed with dd in step 10, it should auto mount on OSX). Place the new card into the Pi and power it up. With any luck, it will fire up as it did before. Note, you’ll still only see the original filesize, for now..

(See how it still says 29G? That’s a good indication my old SD card (and all the partitions on it) were copied over to the 200G card.)

Step 12: Expand the filesystem. Run the following command to stretch out the filesystem to the full 200G.
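
That’s raspi-config again:
sudo raspi-config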

You can then select ‘Expand filesystem’ from the menu and reboot..

Step 13: Revel in your success.

Here we see 181GiB of usable space (roughly 200GB). Hooray!

02 Mar

Regex for private-ASN on Junos

Hi, super quick note to share something I’ve made to match 2 Byte Private ASN numbers (64512-65535) in Junos. You can apply it to match a community, or in an AS-PATH.

^((6451[2-9]|645[2-9][0-9]|64[6-9][0-9][0-9]|65[0-4][0-9][0-9]|655[0-2][0-9]|6553[0-5]).*)$
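
As an example of the community case, something along these lines should do it (syntax from memory – test it against your policy before leaning on it):
set policy-options community PRIVATE-2BYTE-ASN members "^(6451[2-9]|645[2-9][0-9]|64[6-9][0-9][0-9]|65[0-4][0-9][0-9]|655[0-2][0-9]|6553[0-5]):.*$"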

I will try and expand it to catch “0”, and also 4 Byte Private ASN, but I can’t burn time on that at the moment.

This one catches “0” (also an invalid community), but doesn’t work in Junos, as the \D token isn’t supported…

^((6451[2-9]|645[2-9][0-9]|64[6-9][0-9][0-9]|65[0-4][0-9][0-9]|655[0-2][0-9]|6553[0-5]).*)|(\D0:).*$