WireGuard on host (Manjaro KDE) - How to exclude KVM guest VM Whonix-Gateway from using host VPN?

Hello,

I’ve been at this for quite a while, and I’m unable to get this figured out.

All I’m trying to do is keep browsing through the VPN on my host OS, while my KVM guest is excluded from the host VPN, so it connects to the internet directly and uses Tor.

I understand that I can run ip route on Manjaro to get a list of routes. And I know I can somehow add something to PostUp and PreDown in a WireGuard config file in order to exclude the KVM guest VM Whonix-Gateway (using network source: virtual network "Whonix-External", NAT, with the virtio device model) from using the host VPN. I’m just not sure how to go about it, and if anyone can help me out I would really appreciate it.
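Based on what I’ve read so far, I think the idea might be something like the following, added under [Interface] in mullvad-ca14.conf (a rough, untested sketch; I’m assuming the Whonix-External network is the 10.0.2.0/24 subnet on virbr1 shown in my ip route output below, and that a higher-priority policy rule makes that traffic skip wg-quick’s VPN routing table):

PostUp = ip rule add from 10.0.2.0/24 lookup main priority 100
PreDown = ip rule del from 10.0.2.0/24 lookup main priority 100

But I’d like confirmation from someone who knows before I rely on it.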

P.S. I’m not sure if it’s okay that I post all these addresses here; does that put me at risk? Please let me know and I’ll remove them.

I was told that I could also exclude the libvirt and kvm user groups from the WireGuard firewall rules, but I’m not sure how to go about that either.

This is what my mullvad-ca14.conf file looks like in /etc/wireguard:

[Interface]
PrivateKey = <privatekey>
Address = 10.66.218.22/32,fc00:bbbb:bbbb:bb01::3:da15/128
DNS = 193.138.218.74

[Peer]
PublicKey = <publickey>
AllowedIPs = 0.0.0.0/0,::0/0
Endpoint = 107.181.189.206:51820

This is what ip route looks like:

default via 192.168.1.254 dev wlp2s0 proto dhcp metric 600 
10.0.2.0/24 dev virbr1 proto kernel scope link src 10.0.2.2 
192.168.1.0/24 dev wlp2s0 proto kernel scope link src 192.168.1.73 metric 600 
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown

This is what ip addr looks like with wg-quick up running the mullvad-ca14.conf file:
(I am not using systemctl enable/start wg-quick@mullvad-ca14, nor have I added the kill switch described in Mullvad’s WireGuard FAQ to the config under [Interface].)

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s31f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether 28:f1:0e:48:d3:b6 brd ff:ff:ff:ff:ff:ff
3: wlp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e4:a7:a0:52:13:04 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.73/24 brd 192.168.1.255 scope global dynamic noprefixroute wlp2s0
       valid_lft 64592sec preferred_lft 64592sec
    inet6 2001:569:fc87:4f00:42cc:f47d:b896:ecde/64 scope global dynamic noprefixroute 
       valid_lft 14687sec preferred_lft 14387sec
    inet6 fe80::ec9d:8da2:38d9:3647/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
4: virbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:cc:b7:c7 brd ff:ff:ff:ff:ff:ff
5: virbr2-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr2 state DOWN group default qlen 1000
    link/ether 52:54:00:cc:b7:c7 brd ff:ff:ff:ff:ff:ff
6: virbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:37:14:76 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.2/24 brd 10.0.2.255 scope global virbr1
       valid_lft forever preferred_lft forever
7: virbr1-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr1 state DOWN group default qlen 1000
    link/ether 5a:5e:05:20:3c:bb brd ff:ff:ff:ff:ff:ff
8: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:8b:1b:95 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
9: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:8b:1b:95 brd ff:ff:ff:ff:ff:ff
14: mullvad-ca14: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none 
    inet 10.66.213.43/32 scope global mullvad-ca14
       valid_lft forever preferred_lft forever
    inet6 fc00:bbbb:bbbb:bb01::3:d52a/128 scope global 
       valid_lft forever preferred_lft forever
15: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master virbr1 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:01:36:c1 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe01:36c1/64 scope link 
       valid_lft forever preferred_lft forever
16: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master virbr2 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:2d:cd:9b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe2d:cd9b/64 scope link 
       valid_lft forever preferred_lft forever
17: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master virbr2 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:c0:2d:38 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fec0:2d38/64 scope link 
       valid_lft forever preferred_lft forever

Thanks for taking the time to look at this. I found one example (sort of) on Reddit here just now, and earlier I read through Whonix’s Tunneling wiki, but nowhere does it say how to exclude the Whonix-Gateway VM from the host’s WireGuard VPN config, and I still can’t figure it out.

EDIT:
I’ve come across this post on the Gentoo forums, thanks to the Whonix KVM maintainer @hulahoop from the Whonix forums. After a little digging, I was able to learn a bit from that Gentoo forum post, and I came across this post on Stack Exchange with a bit more detail. However, now I’m more concerned about the [Interface] section of the WireGuard config file, specifically PostUp and, most importantly, PreDown, to make sure that the iptables rule gets removed when I disconnect from the WireGuard connection.
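If I’m understanding those posts right, the PreDown line just has to delete the exact same rule that PostUp adds (-D instead of -A), something like this under [Interface] (untested; 51820 is the fwmark/routing table wg-quick normally picks for a full-tunnel config, which I believe can be checked with "wg show mullvad-ca14 fwmark", and virbr1 is the bridge Whonix-External appears to be using):

PostUp = iptables -t mangle -A PREROUTING -i virbr1 -j MARK --set-mark 51820
PreDown = iptables -t mangle -D PREROUTING -i virbr1 -j MARK --set-mark 51820

The idea, as far as I can tell, is that packets carrying wg-quick’s own fwmark skip its VPN routing table and fall through to the main table, so the guest traffic would go out wlp2s0 instead of the tunnel.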
Still hoping someone can help me make sense of it to do it properly. Thanks again for reading this far…

Hi,

I don’t mean to necro-post or anything; I came across this while going through older posts looking for something else, and I saw you don’t have a reply yet.

I suspect what you’re looking for is split tunneling, which can be accomplished in different ways, one of which is cgroups.

I suggest you look into cgroups, specifically running an application in/with a specific cgroup. Then you can configure iptables to route the traffic from that cgroup differently than the rest (the default route).
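I’ve never tried it myself, but from what I’ve read the rough shape is something like this (only a sketch; the classid 0x00110011, the mark 3, and table 100 are arbitrary values I picked, it assumes a cgroup v1 net_cls hierarchy with the xt_cgroup iptables match, and 192.168.1.254/wlp2s0 are borrowed from your ip route output earlier in the thread):

# put the "no VPN" processes in their own net_cls cgroup
mkdir -p /sys/fs/cgroup/net_cls/novpn
echo 0x00110011 > /sys/fs/cgroup/net_cls/novpn/net_cls.classid
# mark locally generated packets coming from that cgroup
iptables -t mangle -A OUTPUT -m cgroup --cgroup 0x00110011 -j MARK --set-mark 3
# route marked packets through a separate table whose default route is the plain WAN gateway
ip route add default via 192.168.1.254 dev wlp2s0 table 100
ip rule add fwmark 3 lookup 100
# rewrite the source address so replies come back in on wlp2s0
iptables -t nat -A POSTROUTING -m mark --mark 3 -o wlp2s0 -j MASQUERADE
# finally, move the target process into the cgroup
echo <pid> > /sys/fs/cgroup/net_cls/novpn/cgroup.procs

Whether that maps cleanly onto traffic coming from a VM (which is forwarded, not locally generated) I honestly can’t say.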

Now, I’ve never done this, so I can’t say for certain this is what you want. But based on a lot of research over the last week, I suspect that’s what you’re looking for.

Hi, and thank you for your reply. Yes, split tunnelling is what I was looking for; I just didn’t know what it was called. One guy on Reddit suggested I pay someone who is an expert with iptables, etc.
Meh, then I’ll never learn.
Between my post and now, Mullvad VPN released an update that offers split tunnelling for customers, and it’s been working. However, I have to select virt-manager KVM/QEMU in the Mullvad app, but then it applies to the other VMs I need running at the same time. So I will be looking into your suggestion/advice. Thank you again.

You’re very welcome and I really hope it works.
From what I’ve been able to gather, you can do it with cgroups or an fwmark (I don’t know if they’re the same thing, but I don’t think so).

Doing it with an fwmark (or just a plain mark) requires you to add an fwmark/mark to the packets using iptables, create a custom routing table for those packets, and then use iproute2 to route the marked packets via that custom table.
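As far as I can tell, those three pieces map to something like this (again, just a sketch I pieced together from reading; mark 3 and table 100 are arbitrary numbers, and the gateway, WAN interface, and bridge names are borrowed from your ip route output above):

# 1. mark everything arriving from the guest bridge
iptables -t mangle -A PREROUTING -i virbr1 -j MARK --set-mark 3
# 2. a custom routing table whose default route bypasses the tunnel
ip route add default via 192.168.1.254 dev wlp2s0 table 100
# 3. send marked packets to that table instead of the normal lookup
ip rule add fwmark 3 lookup 100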

Once again, I’ve never done this, so I don’t know if this is the same thing or if it will even work, let alone have exact instructions for how to do it.

Good luck!