[KyOSS Discuss] Problems with network setup for openvpn

Randy Syring randy.syring@lev12.com
Fri Jun 27 12:48:58 EDT 2014


Ok, hoping someone out there can help me figure this out.  I have a 
couple of servers at Linode.  I'm trying to set them up so that I have a 
VPN into one of the machines and can then reach all the other machines 
over the private Linode network.  Public access to private services 
(SSH, etc.) would then be restricted to those with VPN access.

Note: I have *no firewalls* running on these servers yet.

internal server (running the openvpn server):
> eth0      Link encap:Ethernet  HWaddr f2:3c:91:db:68:b4
>           inet addr:23.239.17.12  Bcast:23.239.17.255 Mask:255.255.255.0
>           inet6 addr: 2600:3c02::f03c:91ff:fedb:68b4/64 Scope:Global
>           inet6 addr: fe80::f03c:91ff:fedb:68b4/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:1500 Metric:1
>           RX packets:80780 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:102812 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:1000
>           RX bytes:14317079 (14.3 MB)  TX bytes:17385151 (17.3 MB)
>
> eth0:1    Link encap:Ethernet  HWaddr f2:3c:91:db:68:b4
>           inet addr:192.168.137.64  Bcast:192.168.255.255  Mask:255.255.128.0
>           UP BROADCAST RUNNING MULTICAST  MTU:1500 Metric:1
>
> tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
>           inet addr:172.20.1.1  P-t-P:172.20.1.2 Mask:255.255.255.255
>           UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500  Metric:1
>           RX packets:2318 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:1484 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:100
>           RX bytes:174573 (174.5 KB)  TX bytes:170941 (170.9 KB)


Comments on the above:

  * eth0 is the public interface
  * eth0:1 is the interface to the private network
  * The VPN tunnel works correctly.  From a client connected to VPN, I
    can ping 172.20.1.1 and 192.168.137.64.
  * The problem I'm having is that, from a client connected to the VPN,
    I can't ping anything else on the private network.
  * net.ipv4.ip_forward=1 is set on this server
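For reference, the checks I'm using to confirm forwarding really is on
(plain sysctl/procfs reads, nothing exotic):

```shell
# Current runtime value; 1 means the kernel will forward packets
# between interfaces (e.g. eth0:1 -> tun0)
sysctl -n net.ipv4.ip_forward

# Same value read straight out of procfs
cat /proc/sys/net/ipv4/ip_forward

# To make the setting persistent, net.ipv4.ip_forward=1 goes in
# /etc/sysctl.conf and is reloaded with `sysctl -p`
```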


database server (nix03):
> root@nix03:~# ifconfig
> eth0      Link encap:Ethernet  HWaddr f2:3c:91:73:d2:cc
>           inet addr:173.230.140.52  Bcast:173.230.140.255  Mask:255.255.255.0
>           inet6 addr: 2600:3c02::f03c:91ff:fe73:d2cc/64 Scope:Global
>           inet6 addr: fe80::f03c:91ff:fe73:d2cc/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:1500 Metric:1
>           RX packets:12348 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:44434 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:1000
>           RX bytes:1166666 (1.1 MB)  TX bytes:5339936 (5.3 MB)
>
> eth0:1    Link encap:Ethernet  HWaddr f2:3c:91:73:d2:cc
>           inet addr:192.168.137.63  Bcast:192.168.255.255  Mask:255.255.128.0
>           UP BROADCAST RUNNING MULTICAST  MTU:1500 Metric:1
Comments on the above:

  * eth0 is the public interface
  * eth0:1 is the interface to the private network
  * I can ping the internal server on the private interface
    (192.168.137.64).

*Current problem:*

I want to be able to hit the database server through the VPN. From my 
client, I'd like to be able to ping 192.168.137.63. However, that 
currently fails.

In my attempts to troubleshoot, I decided to approach it from the db 
server side and see if I could get a ping through to the VPN tunnel 
endpoint on the internal server (172.20.1.1).  I realized that I would 
need to set up a route on the database server to tell it where to send 
packets destined for the 172.20.1.0/24 network, so I did that:

> root@nix03:~# ip route add 172.20.1.0/24 via 192.168.137.64
> root@nix03:~# ip route list
> default via 173.230.140.1 dev eth0
> 172.20.1.0/24 via 192.168.137.64 dev eth0
> 173.230.140.0/24 dev eth0  proto kernel  scope link src 173.230.140.52
> 192.168.128.0/17 dev eth0  proto kernel  scope link src 192.168.137.63
> root@nix03:~# ip route get 172.20.1.1
> 172.20.1.1 via 192.168.137.64 dev eth0  src 192.168.137.63
>     cache
>

So, *I think*, based on the above, when I ping 172.20.1.1, my server 
should send the packets to 192.168.137.64 (the internal server).  That 
server should, because IP forwarding is enabled, accept the packet on 
eth0:1 and deliver it to tun0 (172.20.1.1).

But, as you might have guessed, pinging 172.20.1.1 from nix03 (db 
server) does not work.

I did some packet capturing to see which MAC address my ICMP packets 
were getting sent to:

> root@nix03:~# tcpdump -i eth0 -e icmp
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
> 16:41:39.623759 f2:3c:91:73:d2:cc (oui Unknown) > f2:3c:91:db:68:b4 (oui Unknown), ethertype IPv4 (0x0800), length 98: 192.168.137.63 > 172.20.1.1: ICMP echo request, id 3324, seq 33653, length 64
> root@nix03:~# arp
> Address                  HWtype  HWaddress           Flags Mask   Iface
> 192.168.137.64           ether   f2:3c:91:db:68:b4   C            eth0

And that tells me the packets should be getting to the internal server; 
at least, they are being sent to the right NIC.  However, when I run 
tcpdump on the internal server, I don't see any packets coming in.
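For completeness, the capture and route check I've been running on the 
internal server are along these lines (the filter host is nix03's 
private address; eth0:1 is only an alias, so the capture interface is 
still eth0):

```shell
# On the internal server: watch for ICMP from nix03's private address.
# -n skips DNS lookups, -e shows MAC addresses; needs root.
tcpdump -n -e -i eth0 icmp and host 192.168.137.63

# Sanity-check which route the kernel would pick for the reply
ip route get 192.168.137.63
```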


What else can I try?  Thanks in advance.


*Randy Syring*
Development | Executive Director
Direct: 502.276.0459
Office: 812.285.8766
Level 12 Technologies <https://www.lev12.com/>
/Principled People | Technology that Works/
