Squid HTTPS interception and filtering without client certificates
I had a requirement to filter (all) web traffic on a few servers. This is typically easy with Squid using its transparent proxy function. Where it gets difficult is filtering domains for HTTPS traffic.
I don’t want to SSL-intercept the traffic, and I don’t want to install CA certificates on the clients; I only want to filter URLs against a whitelist of sites the servers may access. This is how it is done:
yum install squid # I used squid 3.5.20
/usr/lib64/squid/ssl_crtd -c -s /var/lib/ssl_db
chown -R squid.squid /var/lib/ssl_db
mkdir /etc/squid/ssl_cert/
chown -R squid.squid /etc/squid/ssl_cert/
cd /etc/squid/ssl_cert
openssl req -new -newkey rsa:1024 -days 1365 -nodes -x509 -keyout myca.pem -out myca.pem
echo "www.google.com" > /etc/squid/whitelist
chmod 640 /etc/squid/whitelist
chown root:squid /etc/squid/whitelist
/etc/squid/squid.conf:
acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 127.0.0.1/32   # loopback
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager

acl step1 at_step SslBump1
acl whitelist_ssl ssl::server_name "/etc/squid/whitelist"
acl whitelist dstdomain "/etc/squid/whitelist"
acl port_80 port 80
acl http proto http

ssl_bump peek step1
ssl_bump splice whitelist_ssl
ssl_bump terminate all !whitelist_ssl

http_access deny http port_80 localnet !whitelist
http_access allow localnet
http_access deny all

https_port 3127 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/etc/squid/ssl_cert/myca.pem key=/etc/squid/ssl_cert/myca.pem
http_port 3128 transparent

coredump_dir /var/spool/squid

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
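Before restarting, it is worth having Squid syntax-check the new config. `squid -k parse` is the standard way to do this; the service name here is an assumption and may differ on your distro:

```shell
# Validate the config; only restart if it parses cleanly
squid -k parse && systemctl restart squid
```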
# Test it with:
iptables -m owner --uid-owner cm -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to 127.0.0.1:3128
iptables -m owner --uid-owner cm -t nat -A OUTPUT -p tcp --dport 443 -j DNAT --to 127.0.0.1:3127
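With those rules in place, traffic from user ‘cm’ can be used to verify the filtering. A rough sketch, assuming www.google.com is the only whitelist entry (www.example.com is just a stand-in for any non-whitelisted site):

```shell
# A whitelisted domain should complete the TLS handshake and get a response
sudo -u cm curl -sI https://www.google.com | head -n 1

# A non-whitelisted domain should be terminated during the handshake,
# so curl fails before any HTTP response arrives
sudo -u cm curl -sI https://www.example.com || echo "blocked as expected"
```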
# Closing notes and thoughts
Around this section here:

http_access deny http port_80 localnet !whitelist
http_access allow localnet
http_access deny all
It looks a bit funny because we ‘allow localnet’, which would typically give our clients open access. However, Squid evaluates:

ssl_bump terminate all !whitelist_ssl
http_access deny http port_80 localnet !whitelist

first, so every site other than the whitelist is blocked with an explicit ‘deny’ (HTTP) or an SSL ‘terminate’ (HTTPS) before the allow is reached.
Also, a proxy-aware application will not work with the above configuration, because the proxy runs in transparent / intercept mode only and has no normal forward-proxy http_port directive. That suits me, as it minimizes the avenues for abuse.
And for a final, final step, you need to configure your edge (or local) firewall to destination-NAT web traffic back to the two Squid ports.
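On a gateway box that could look something like the following sketch. The interface name eth1 and the use of REDIRECT (rather than DNAT) are assumptions; adjust them for your network layout:

```shell
# On the gateway: steer intercepted web traffic from clients on eth1
# into the local Squid ports (3128 for HTTP, 3127 for HTTPS)
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80  -j REDIRECT --to-ports 3128
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 443 -j REDIRECT --to-ports 3127
```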
Block network traffic based on UID / User and GID / Group
I just found out that you can apply different IPTables rules based on UID and GID.
Just check that your kernel / iptables supports the module:
iptables -m owner --help
Which should output, near the bottom, something like:

owner match options:
[!] --uid-owner userid[-userid]       Match local UID
[!] --gid-owner groupid[-groupid]     Match local GID
[!] --socket-exists                   Match if socket exists
Then make a rule as required, e.g. user ‘cm’ gets their web traffic transparently proxied via Squid:

iptables -m owner --uid-owner cm -t nat -A OUTPUT -o eth0 -p tcp --dport 80 -j DNAT --to 127.0.0.1:3128
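The same idea works per group with --gid-owner. A sketch, where the group name ‘developers’ and the policy (blocking outbound SSH) are made up purely for illustration:

```shell
# Drop outbound SSH for any process running under the 'developers' group
iptables -A OUTPUT -m owner --gid-owner developers -p tcp --dport 22 -j DROP
```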
Pretty cool!
Fast development of Grok / Logstash extractions and fields
I had the fun task of writing grok rules in a particular way, along with a complicated pipeline. I got tired of pushing the rules and restarting Logstash; there had to be a better way!
This is what I ended up doing on my development system:
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.3.1.rpm
yum localinstall logstash-6.3.1.rpm
Create your pipeline in: /etc/logstash/conf.d/
Create the following example files:
/tmp/input.txt:
2018-07-16T01:53:28.716258+00:00 acme-host1 sshd[12522]: Disconnected from 8.8.8.8 port 37972
000-file-in.conf:
input {
  file {
    path => [ "/tmp/input.txt" ]
    start_position => "beginning"
    type => "test"
    add_field => { "sourcetype" => "test" }
    sincedb_path => "/dev/null"
  }
}
25-filter.conf:
filter {
  if [type] == "test" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{SYSLOGHOST:logsource} %{SYSLOGPROG}?: %{GREEDYDATA:message}" }
      overwrite => [ "message" ]
      add_tag => [ "p25vls" ]
    }
    date {
      locale => "en"
      match => [ "timestamp", "ISO8601" ]
      timezone => "UTC"
    }
  }
}
999-output.conf:
output { stdout { codec => rubydebug } }
Run:
/usr/share/logstash/bin/logstash -r -f /etc/logstash/conf.d/
Give it a minute (because, well, Java).
Now, in a second window, modify your pipeline (25-filter.conf etc.) and save it.
You should see Logstash reprocess the data from ‘/tmp/input.txt’.
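With -r (auto-reload) running, you can iterate on the sample data as well as the filters; appending a line to the input file should show up in the rubydebug output almost immediately. The extra sample event below is made up for illustration:

```shell
# Append another sample event; Logstash is tailing /tmp/input.txt and will pick it up
echo '2018-07-16T01:54:02.000000+00:00 acme-host1 sshd[12530]: Accepted publickey for root from 10.0.0.5 port 41000' >> /tmp/input.txt
```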
Happy iterational development :-)