Debian Lighttpd does infinite redirect loop and fails to connect

Just imagine you're running a blog that requires zero maintenance, and one day you visit it and it doesn't load!

You try Firefox, then Chrome, and finally Edge (the new IE).

You notice that Firefox and Chrome seem to loop and then finally fail, while Edge works.

You notice that cURL works.

Things are but aren’t working.

Finally you notice Firefox is trying to negotiate TLS 1.3! Interesting. How do I disable that on Debian 9 with Lighttpd? You can't!

What’s the fix?

In lighttpd.conf, in your SSL section, add:

ssl.disable-client-renegotiation = "disable"

ssl.disable-client-renegotiation exists because of a TLS renegotiation bug back in 2009 (CVE-2009-3555). That bug has long been patched in newer versions of OpenSSL, so renegotiation is safe to turn back on.

Disabling this setting is the answer to your troubles :-)
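For context, here is a minimal sketch of where the directive sits in lighttpd.conf; the socket block and pemfile path are placeholders for your own SSL section:

```
$SERVER["socket"] == ":443" {
    ssl.engine  = "enable"
    # placeholder path for your own certificate bundle:
    ssl.pemfile = "/etc/lighttpd/ssl/example.com.pem"
    # Re-enable client renegotiation; the 2009 bug this guarded
    # against is long fixed in OpenSSL:
    ssl.disable-client-renegotiation = "disable"
}
```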

Squid HTTPS interception and filtering without client certificates

I had a requirement to filter (all) web traffic on a few servers. This is typically easy with Squid using its transparent proxy function. Where it gets difficult is filtering domains for HTTPS traffic.
I don't want to SSL-intercept the traffic, and I don't want to install CA certificates on the clients; I only want to filter requests against a whitelist of domains the servers may access. This is how it is done:

yum install squid
# I used squid 3.5.20

/usr/lib64/squid/ssl_crtd -c -s /var/lib/ssl_db
chown -R squid:squid /var/lib/ssl_db

mkdir /etc/squid/ssl_cert/
chown -R squid:squid /etc/squid/ssl_cert/
cd /etc/squid/ssl_cert
openssl req -new -newkey rsa:1024 -days 1365 -nodes -x509 -keyout myca.pem -out myca.pem
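The openssl command above prompts interactively for the certificate subject. As a quick sanity check that the CA generated correctly, here is a non-interactive sketch using throwaway paths under /tmp (the real files live in /etc/squid/ssl_cert):

```shell
# Generate a throwaway self-signed CA non-interactively (-subj skips the prompts)
openssl req -new -newkey rsa:2048 -days 1365 -nodes -x509 \
    -subj "/CN=squid-test-ca" \
    -keyout /tmp/test-ca.key -out /tmp/test-ca.crt

# Inspect the subject and validity window of the certificate
openssl x509 -in /tmp/test-ca.crt -noout -subject -dates

# Squid wants key and certificate in one PEM file, as in the command above
cat /tmp/test-ca.key /tmp/test-ca.crt > /tmp/test-ca.pem
```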

echo "www.google.com" > /etc/squid/whitelist
chmod 640 /etc/squid/whitelist
chown root:squid /etc/squid/whitelist

/etc/squid/squid.conf:

acl localnet src 10.0.0.0/8	# RFC1918 possible internal network
acl localnet src 127.0.0.1/32	# loopback
acl localnet src 172.16.0.0/12	# RFC1918 possible internal network
acl localnet src 192.168.0.0/16	# RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80		# http
acl Safe_ports port 21		# ftp
acl Safe_ports port 443		# https
acl Safe_ports port 70		# gopher
acl Safe_ports port 210		# wais
acl Safe_ports port 1025-65535	# unregistered ports
acl Safe_ports port 280		# http-mgmt
acl Safe_ports port 488		# gss-http
acl Safe_ports port 591		# filemaker
acl Safe_ports port 777		# multiling http
acl CONNECT method CONNECT

http_access deny !Safe_ports

http_access deny CONNECT !SSL_ports

http_access allow localhost manager
http_access deny manager

acl step1 at_step SslBump1
acl whitelist_ssl ssl::server_name "/etc/squid/whitelist"
acl whitelist dstdomain "/etc/squid/whitelist"
acl port_80 port 80
acl http proto http

ssl_bump peek step1
ssl_bump splice whitelist_ssl
ssl_bump terminate all !whitelist_ssl

http_access deny http port_80 localnet !whitelist
http_access allow localnet
http_access deny all

https_port 3127 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/etc/squid/ssl_cert/myca.pem key=/etc/squid/ssl_cert/myca.pem
http_port 3128 transparent

coredump_dir /var/spool/squid

refresh_pattern ^ftp:		1440	20%	10080
refresh_pattern ^gopher:	1440	0%	1440
refresh_pattern -i (/cgi-bin/|\?) 0	0%	0
refresh_pattern .		0	20%	4320

# Test it with:

iptables -m owner --uid-owner cm -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to 127.0.0.1:3128
iptables -m owner --uid-owner cm -t nat -A OUTPUT -p tcp --dport 443 -j DNAT --to 127.0.0.1:3127
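
With those rules loaded and Squid running, you can roughly verify the filtering from the proxied user; the invocations below are a sketch and assume the whitelist from earlier:

```
# As the proxied user 'cm': a whitelisted domain should load...
sudo -u cm curl -s -o /dev/null -w "%{http_code}\n" https://www.google.com
# ...while a non-whitelisted one should be denied: port 80 gets Squid's
# error page, and port 443 is terminated during the TLS handshake.
sudo -u cm curl -sv https://example.org -o /dev/null
```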

# Closing notes and thoughts

Around this section here:
http_access deny http port_80 localnet !whitelist
http_access allow localnet
http_access deny all

It looks a bit funny because we 'allow localnet', which typically gives our clients open access. However, assessing:

ssl_bump terminate all !whitelist_ssl
http_access deny http port_80 localnet !whitelist

rules first, you can see that all sites other than the whitelist are filtered out with an explicit 'deny' or an SSL 'terminate'.

Also, a proxy-aware application pointed at the above configuration will not work, because both ports are configured in transparent / intercept mode only; there is no normal forward-proxy http_port directive. That suits me, as it minimizes the avenues for abuse.

As a final, final step, configure your edge (or local) firewall to destination-NAT client traffic back to the two Squid ports.
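For example, on a Linux gateway that routes the clients (eth1 as the LAN-facing interface is an assumption), the rules could look like:

```
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80  -j REDIRECT --to-port 3128
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 443 -j REDIRECT --to-port 3127
```

REDIRECT is enough when Squid runs on the gateway itself; if Squid sits on another box, use DNAT towards that box's address instead.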

Block network traffic based on UID / User and GID / Group

I just found out that you can apply different IPTables rules based on UID and GID.

Just check that your kernel / iptables supports the module:

iptables -m owner --help

which should output something like this near the bottom:

owner match options:
[!] --uid-owner userid[-userid]      Match local UID
[!] --gid-owner groupid[-groupid]    Match local GID
[!] --socket-exists                  Match if socket exists

Then make a rule as required, e.g. user 'cm' gets their web traffic transparently proxied via Squid:

iptables -m owner --uid-owner cm -t nat -A OUTPUT -o eth0 -p tcp --dport 80 -j DNAT --to 127.0.0.1:3128

Pretty cool!

sshd without-password vs prohibit-password

Upgrading a server from Debian 8 to Debian 9, I noticed in /etc/ssh/sshd_config that 'PermitRootLogin' had the argument 'prohibit-password'. Having not seen that before, I wondered what the difference was between it and 'without-password'.
Turns out they mean and do the same thing, but 'prohibit-password' was introduced because it is less ambiguous. So there you have it!
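Either spelling is accepted by recent OpenSSH; in /etc/ssh/sshd_config the newer form looks like:

```
# Root may log in with keys; password authentication for root is refused.
PermitRootLogin prohibit-password
```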

Check out the OpenSSH release notes for proof :-)

Check if DNS Server can zone transfer

If you work in the ISP space, you might need to check whether a downstream or upstream server is set up to allow zone transfers (AXFR).

Test via:

dig -b your-permitted-source-ip @their-dns-server-ip-address exampleDomain.com AXFR
eg. dig -b 8.8.8.8 @208.67.222.222 exampleDomain.com AXFR

(-b sets the source address of the query, so it must be an address configured on your machine that the remote server permits to transfer the zone.)

And it should return some records about the zone!
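
If you need to script the check, the transfer either dumps the zone's records or fails; a rough sketch, with 192.0.2.53 as a placeholder for the server under test:

```
# A permitted AXFR prints the zone's records (including its SOA);
# a refusal prints "Transfer failed." or a connection error instead.
if dig @192.0.2.53 exampleDomain.com AXFR +noall +answer | grep -q "SOA"; then
    echo "zone transfer allowed"
else
    echo "zone transfer refused"
fi
```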