Using HAProxy to Multiplex Port 80 – Linux Ops – 运维网 – Powered by Discuz!

Chapter 1 Installation environment

1.1 System environment

[root@10 conf]# uname -r
2.6.32-642.4.2.el6.x86_64
[root@10 conf]# uname -m
x86_64
[root@10 conf]# cat /etc/redhat-release
CentOS release 6.8 (Final)

1.2 Software versions

1.2.1 HAProxy

[root@10 haproxy]# ./sbin/haproxy -v
HA-Proxy version 1.5.11 2015/01/31
Copyright 2000-2015 Willy Tarreau <w@1wt.eu>

1.2.2 HAProxy download address

Chapter 2 Installing and starting HAProxy

2.1 Installing HAProxy



tar xf haproxy-1.5.11.tar.gz
cd haproxy-1.5.11
make TARGET=linux26 PREFIX=/usr/local/haproxy
make install PREFIX=/usr/local/haproxy
Note: TARGET selects the kernel generation of the target system (linux26 for the 2.6.x kernel used here).

2.2 HAProxy configuration file



cd /usr/local/haproxy/
mkdir conf
cd conf
vim haproxy.cfg
[root@10 conf]# cat haproxy.cfg
global
    log         127.0.0.1 local0
    log         127.0.0.1 local1 notice
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend main
    mode tcp
    bind *:80
    log global
    option tcplog
    log-format "%ft %b/%s"

    tcp-request inspect-delay 3s
    # TLS handshake: record type 0x16, protocol version 03 01
    acl is_https req.payload(0,3) -m bin 160301
    # GET POS(T) PUT DEL(ETE) OPT(IONS) HEA(D) CON(NECT) TRA(CE)
    acl is_http  req.payload(0,3) -m bin 474554 504f53 505554 44454c 4f5054 484541 434f4e 545241
    # SSH
    acl is_ssh   req.payload(0,3) -m bin 535348
    tcp-request content accept if is_http
    tcp-request content accept if is_https
    tcp-request content accept if is_ssh
    tcp-request content accept
    use_backend https if is_https
    use_backend http  if is_http
    use_backend ssh   if is_ssh
    # fall through to ssh for anything unidentified
    use_backend ssh

backend ssh
    mode tcp
    timeout server 1h
    server server-ssh 127.0.0.1:22

backend http
    mode tcp
    server ngx01 127.0.0.1:82 maxconn 10 check inter 3s

backend https
    mode tcp
    server ngx02 127.0.0.1:433 maxconn 10 check inter 3s
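The hex strings in the `-m bin` ACLs above are just the first three bytes each protocol sends. A quick Python sketch (illustrative only) shows how they are derived:

```python
# Derive the hex prefixes matched by the req.payload(0,3) ACLs above.
# HTTP requests open with a method name; TLS ClientHellos open with
# record type 0x16 (handshake) followed by version bytes 0x03 0x01.
http_methods = ["GET", "POST", "PUT", "DELETE", "OPTIONS", "HEAD", "CONNECT", "TRACE"]
http_prefixes = [m.encode("ascii")[:3].hex() for m in http_methods]
print(http_prefixes)      # first three bytes of each method name

tls_prefix = bytes([0x16, 0x03, 0x01]).hex()
print(tls_prefix)         # '160301'

ssh_prefix = "SSH".encode("ascii").hex()
print(ssh_prefix)         # '535348'
```

Matching on only three bytes is enough here because the three protocols being multiplexed cannot collide in their opening bytes.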
2.3 Starting HAProxy



useradd -s /sbin/nologin -M haproxy
mkdir /var/lib/haproxy
/usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/conf/haproxy.cfg
Chapter 3 Testing

3.1 Testing the SSH service



{17:05}~/login ➭ ssh root@10.63.101.52 -p80
Last login: Thu Oct 13 17:05:27 2016 from localhost
3.2 Testing the HTTP service

(screenshot: HTTP service test result)


Do You Really Understand How LVS, Nginx, and HAProxy Work? – itjava – twt企业IT交流平台


Most Internet systems today use server clusters: the same service is deployed on multiple servers that together form a cluster and serve requests as a single whole. These may be clusters of web application servers, database servers, distributed cache servers, and so on.

In practice there is usually a load balancer in front of the web server cluster. Its task is to act as the entry point for web traffic: pick the most suitable web server and forward the client's request to it, giving the client a transparent path to the real server.

The "cloud computing" and distributed architectures popular in recent years are essentially the same idea: back-end servers are treated as compute and storage resources that a management server wraps into a single service exposed to the outside. The client does not need to know which machine actually serves it; as far as it can tell, it is talking to one server of nearly unlimited capacity, while in reality the work is done by the back-end cluster.

LVS, Nginx, and HAProxy are the three most widely used software load balancers today.

Load-balancing choices generally evolve with the scale of the site, and concrete needs require concrete analysis. As a rule of thumb: for small and mid-sized web applications (say, under 10 million page views a day), Nginx is entirely sufficient; with many machines, DNS round-robin is an option, since LVS tends to tie up more machines; for large sites or critical services with many servers, consider LVS.

A common and sensible architecture today: Nginx/HAProxy + Keepalived as the load balancer on the web front end, and on the back end a MySQL one-master-many-slaves setup with read/write splitting, balanced with LVS + Keepalived.

LVS

LVS is short for Linux Virtual Server. LVS is now part of the standard Linux kernel: since Linux 2.4 all of its modules have been built in, so its features can be used directly without patching the kernel.

LVS started in 1998 and has since grown into a mature, well-established project.

The LVS architecture

An LVS-based server cluster consists of three parts:

  1. The front-most load-balancing layer (Load Balancer)
  2. The middle server cluster layer (Server Array)
  3. The bottom shared-storage layer (Shared Storage)

The LVS load-balancing mechanism

Unlike layer-7 software balancers such as HAProxy, LVS does not operate on HTTP messages, so work a layer-7 balancer can do, such as URL parsing, is beyond LVS.

LVS is a layer-4 load balancer: it is built on the transport layer (layer 4 of the OSI model), home of the familiar TCP/UDP, and LVS supports balancing both. Because it works at layer 4, it is much more efficient than higher-level approaches such as DNS round-robin, application-layer scheduling, or client-side scheduling.

So-called layer-4 load balancing decides mainly by the destination address and port in the packet; layer-7 load balancing, also called "content switching", decides mainly by the truly meaningful application-layer content of the message.

LVS forwards traffic mainly by rewriting IP addresses (NAT mode, divided into source rewriting, SNAT, and destination rewriting, DNAT) or by rewriting the destination MAC (DR mode).

NAT mode: network address translation

NAT (Network Address Translation) is a technique for mapping between external and internal addresses.

In NAT mode, packets in both directions pass through LVS, and LVS must act as the gateway of the real servers (RS).

When a packet reaches LVS, LVS performs destination address translation (DNAT), rewriting the destination IP to the RS's IP; to the RS, the packet looks as if the client had sent it directly. When the RS has processed it and replies, the source IP is the RS's IP and the destination is the client's IP. The reply is routed through the gateway (LVS), which performs source address translation (SNAT), rewriting the packet's source to the VIP, so that to the client the reply appears to come straight from LVS.
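The two rewrites can be sketched as a toy model (the CIP/VIP/RIP addresses below are invented for illustration):

```python
# Toy model of address rewriting in LVS NAT mode.
CIP = "203.0.113.10"   # client IP (hypothetical)
VIP = "198.51.100.1"   # virtual IP held by LVS (hypothetical)
RIP = "10.0.0.5"       # real server IP (hypothetical)

def lvs_dnat(pkt):
    """Inbound: LVS rewrites the destination from the VIP to the chosen RS."""
    return {"src": pkt["src"], "dst": RIP}

def lvs_snat(pkt):
    """Outbound: LVS (the RS's gateway) rewrites the source back to the VIP."""
    return {"src": VIP, "dst": pkt["dst"]}

request = {"src": CIP, "dst": VIP}     # client -> VIP
to_rs = lvs_dnat(request)              # LVS -> RS: src=CIP, dst=RIP
reply = {"src": RIP, "dst": CIP}       # RS answers; routed via its gateway (LVS)
to_client = lvs_snat(reply)            # LVS -> client: src=VIP, dst=CIP

print(to_rs, to_client)
```

Note how the client only ever sees the VIP on both legs, which is exactly why LVS must sit on the return path in this mode.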


DR mode: direct routing

In DR mode, LVS and the RS cluster are bound to the same VIP (each RS binds the VIP on its loopback interface). The difference from NAT is that although requests are accepted by LVS, the real server (RealServer, RS) returns its response directly to the user; the reply does not pass back through LVS.

In detail: when a request comes in, LVS only rewrites the frame's destination MAC address to that of a chosen RS, and the packet is forwarded to that RS. Note that neither the source nor the destination IP changes; LVS merely performs a sleight of hand. When the RS receives the forwarded packet, the link layer finds that the MAC is its own, and the network layer finds that the IP is its own too (the VIP on loopback), so the packet is accepted; the RS never notices the LVS in front of it. When the RS replies, it simply sends directly to the source IP (the user's IP), bypassing LVS.

Because DR mode rewrites only MAC addresses and never IPs, and the real server's own IP matches the request's destination IP, no address translation is needed on the way back: the response goes straight to the user's browser, and the balancer's NIC bandwidth cannot become a bottleneck. DR mode therefore performs well and is currently the most widely used load-balancing technique at large sites.

Advantages of LVS

  • Strong load capacity. Working at the transport layer purely for distribution, LVS generates no traffic of its own, which makes it the best-performing of the software load balancers, with low memory and CPU consumption.
  • Little to configure. This is both a drawback and a virtue: with few knobs to turn there is little to touch, which greatly reduces the chance of human error.
  • Stable operation: its load capacity is strong, and complete active/standby schemes exist for it, such as LVS+Keepalived.
  • No traffic: LVS only distributes requests; the traffic itself does not flow out through it, so the balancer's I/O performance is unaffected by heavy traffic.
  • Wide applicability: because LVS works at the transport layer, it can balance almost any application, including HTTP, databases, chat rooms, and so on.

Disadvantages of LVS

  • The software itself cannot handle regular expressions and cannot split static from dynamic content, features many sites now need; this is where Nginx and HAProxy+Keepalived have the advantage.
  • For a sprawling site, an LVS/DR+Keepalived deployment is relatively complex to implement; Nginx/HAProxy+Keepalived is much simpler.

Nginx

Nginx is a powerful web server used to handle highly concurrent HTTP requests and, as a reverse proxy, to balance load. Its strengths are high performance, light weight, low memory consumption, and strong load-balancing ability.

Nginx's architecture

Traditional process- or thread-based models (Apache uses one) handle concurrent connections by creating a separate process or thread per connection, blocking on network or input/output operations. This costs a lot of memory and CPU: each new process or thread needs a fresh runtime environment, heap and stack allocations, and a new execution context, all of which burn extra CPU time; in the end, excessive context switching degrades server performance.

Nginx's architecture, by contrast, is modular, event-driven, asynchronous, single-threaded, and non-blocking.

Nginx relies heavily on I/O multiplexing and event notification. After startup it runs in the background as a daemon, consisting of one master process and n (n >= 1) worker processes. All processes are single-threaded (one main thread each), and inter-process communication mainly uses shared memory.

The master process receives signals from outside, passes signals on to the workers, and monitors their health. The workers do the real request handling, independently and equally competing for client connections. A request is handled entirely within one worker, and since a worker has only a single main thread, it executes on behalf of one request at a time, interleaving the many in-flight connections through its event loop. (The principle closely resembles Netty's.)


Nginx load balancing

Nginx's load balancing mainly supports HTTP and HTTPS at layer 7, the application layer of the seven-layer model.

Nginx balances load by reverse proxying. A reverse proxy (Reverse Proxy) accepts connection requests from the Internet, forwards them to servers on the internal network, and returns the servers' results to the clients that requested the connection; to the outside world, the proxy itself appears to be the server.

Nginx offers many distribution strategies; its upstream module currently supports the following:

  • Round-robin (default): requests are assigned to the back-end servers one by one in order of arrival; a back end that goes down is removed automatically.
  • weight: sets the round-robin probability; weight is proportional to the share of traffic, useful when the back-end servers are unevenly matched.
  • ip_hash: requests are assigned by a hash of the client IP, so each visitor sticks to one back-end server, which solves the session problem.
  • fair (third-party): requests are assigned by back-end response time; servers that respond faster get priority.
  • url_hash (third-party): requests are assigned by a hash of the requested URL, so each URL always goes to the same back end; effective when the back ends are caches.
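As a hedged sketch of how these directives combine (server names, ports, and weights below are invented for illustration):

```nginx
# Hypothetical upstream pool using weighted round-robin.
upstream backend_pool {
    # ip_hash;                       # uncomment for client-IP stickiness instead
    server 10.0.0.11:8080 weight=3;  # stronger machine receives ~3x the requests
    server 10.0.0.12:8080 weight=1;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend_pool;
    }
}
```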

Advantages of Nginx

  • Cross-platform: Nginx compiles and runs on most Unix-like OSes, and a Windows port exists as well.
  • Very simple configuration: extremely easy to pick up, with a configuration style as expressive as writing code.
  • Non-blocking, highly concurrent: officially tested to sustain 50,000 concurrent connections; in real production it commonly runs at 20,000-30,000.
  • Event-driven: the communication layer uses the epoll model, supporting larger numbers of concurrent connections.
  • Master/worker structure: one master process spawns one or more worker processes.
  • Small memory footprint: memory consumption under heavy concurrency is tiny; at 30,000 concurrent connections, 10 Nginx processes consume only about 150 MB (15 MB x 10).
  • Built-in health checks: if one of the back-end web servers behind Nginx goes down, front-end access is unaffected.
  • Bandwidth savings: supports GZIP compression and can add browser-side cache headers.
  • High stability: used as a reverse proxy, the chance of it going down is minimal.

Disadvantages of Nginx

  • Nginx supports only the HTTP, HTTPS, and Email protocols, which narrows its range of application.
  • Back-end health checks are port-based only; checking by URL is not supported. Sessions are not kept directly, though ip_hash can work around this.

HAProxy

HAProxy supports two proxy modes, TCP (layer 4) and HTTP (layer 7), and it also supports virtual hosts.

HAProxy's strengths cover some of Nginx's weaknesses, such as session persistence and cookie-based guidance, and it can check the health of a back-end server by fetching a specified URL.

Like LVS, HAProxy itself is purely load-balancing software; in raw efficiency, HAProxy beats Nginx in balancing speed and also outperforms it in concurrent processing.

HAProxy can balance TCP-level protocols, e.g. load balancing MySQL reads with health checks against the back-end MySQL nodes; likewise, LVS+Keepalived can be used to balance a MySQL master/slave setup.

HAProxy provides a wide range of balancing algorithms: roundrobin, weighted round-robin, source (source-address affinity), uri (by request URL), rdp-cookie (by cookie), and more.
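A minimal backend illustrating the `balance` keyword (the backend name and server addresses are invented for illustration):

```haproxy
backend mysql_read
    mode tcp
    balance roundrobin              # or e.g. "balance source" for source-address affinity
    server db1 10.0.0.21:3306 check
    server db2 10.0.0.22:3306 check
```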


manty's blog: haproxy as a very very overloaded sslh


After using haproxy at work for some time I realized that it can be configured for a lot of things. For example: it knows about SNI (on SSL, this is the method we use to tell which host the client is trying to reach, so that we know which certificate to present, letting us multiplex several virtual hosts on the same SSL IP:port), and it also knows how to make transparent proxy connections (the connections go through haproxy, but the end server will think they arrive directly from the client, since it will see the client's IP as the source IP of the packets).

With these two little features, which are available in haproxy 1.5 (Jessie's version has them all), I thought I could try to substitute haproxy for sslh, giving me a lot of possibilities that sslh cannot offer.

With this in mind, I thought I could multiplex several SSL services on port 443, not only HTTPS but also OpenVPN and the like, and also let these services reach the final server transparently. So what I wanted was not to mimic sslh (which can be done with haproxy) but to get the semantics I needed, which are similar to sslh's but more powerful and with slightly different behaviour, because I liked it better that way.

There is, however, one caveat I don't like about this setup: to achieve transparency, haproxy has to run as root, which is not really something one likes :-( So transparency is great, but it means taking some risks that I personally don't think are worth it.

Anyway, here is the setup. It basically consists of the haproxy configuration; if we want transparency, we also have to add a routing and iptables setup. I'll describe the whole setup here.

Here is what you need to define on /etc/haproxy/haproxy.cfg:

frontend ft_ssl
    bind 192.168.0.1:443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    acl sslvpn req_ssl_sni -i vpn.example.net
    use_backend bk_sslvpn if sslvpn
    use_backend bk_web if { req_ssl_sni -m found }
    default_backend bk_ssh

backend bk_sslvpn
    mode tcp
    source 0.0.0.0 usesrc clientip
    server srvvpn vpnserver:1194

backend bk_web
    mode tcp
    source 0.0.0.0 usesrc clientip
    server srvhttps webserver:443

backend bk_ssh
    mode tcp
    source 0.0.0.0 usesrc clientip
    server srvssh sshserver:22

An example of a transparent setup can be found here, but it lacks some details. For example, if you need to redirect the traffic to the local haproxy you'll want to use xt_TPROXY; there is a better doc for that at squid's wiki. Anyway, if you are playing just with your own machine, like we typically do with sslh, you won't need the TPROXY power, as packets will come straight to your 443, so haproxy will be able to get them without any problem.

The problem comes when you are using transparency (source 0.0.0.0 usesrc clientip), because then packets coming out of haproxy carry the IP of the real client, and thus the backend's answers will go to that client (but with different ports and other TCP data), so it will not work. We have to get those packets back to haproxy: we mark the packets with iptables and then route them to the loopback interface using advanced routing. This is where all the examples tell you to use iptables' mangle table with marking rules on PREROUTING, but that won't work if you have the whole setup (frontend and backends) in just one box; instead you have to write those rules on the OUTPUT chain of the mangle table, ending up with something like this:

*mangle
:PREROUTING ACCEPT
:INPUT ACCEPT
:FORWARD ACCEPT
:OUTPUT ACCEPT
:POSTROUTING ACCEPT
:DIVERT -
-A OUTPUT -s public_ip -p tcp --sport 22 -o public_iface -j DIVERT
-A OUTPUT -s public_ip -p tcp --sport 443 -o public_iface -j DIVERT
-A OUTPUT -s public_ip -p tcp --sport 1194 -o public_iface -j DIVERT
-A DIVERT -j MARK --set-mark 1
-A DIVERT -j ACCEPT
COMMIT

Take that just as an example; better suggestions on how to decide what traffic to send to DIVERT are welcome. The point here is that if you are sending the service to some other box you can do it on PREROUTING, but if you are sending the service to the very same box as haproxy you'll have to mark the packets on the OUTPUT chain.

Once we have the packets marked we just need to route them, something like this will work out perfectly:

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

And that's all for this crazy setup. Of course, if, like me, you don't like the root implication of the transparent setup, you can remove the "source 0.0.0.0 usesrc clientip" lines from the backends and forget about transparency (connections to the backend will come from your local IP); then you can run haproxy with dropped privileges, and you'll need only the plain haproxy.cfg setup, not the weird iptables and advanced-routing setup.

Hope you like the article. By the way, I'd like to point out the main difference between this setup and sslh: I only send the packets to the SSL providers if the client sends SNI info, otherwise I send them to the SSH server, while sslh sends SSL clients without SNI to the SSL provider as well. If your setup mimics sslh and you want to comment on it, feel free to do so.


SSH over SSL, episode 4: a HAproxy based configuration


Purpose of this article

In this article, I am describing how to SSH to a remote server as discreetly as possible, by concealing the SSH packets into SSL. The server will still be able to run an SSL website.

Rationale

In most cases, when your outgoing firewall blocks ssh, you can work around it with sslh, a tool that listens on port 443 server-side and selectively forwards incoming TCP connections, depending on the packet type, to a local SSH or SSL service. You can then happily ssh to your server on port 443 (normally dedicated to HTTPS) and also run a website on the same server, so your connections look like you are just harmlessly visiting this website. However, if your firewall is really sneaky, it will detect that you are sending the wrong packet type to the SSL port and block your connection. In that case there is not much choice: we must hide the SSH connection inside a real SSL tunnel.

Comment for the long time readers

I know, I know: I covered this topic a few times already (here are the first, second, and third episodes). All of those setups relied on a feature of HTTP 1.1 called CONNECT. However, it turns out that most webservers do not implement this CONNECT feature. As a consequence, if you wanted to do this, you were more or less stuck with Apache. This time we are breaking free from Apache with a HAproxy-based configuration. We will use HAproxy's advanced packet inspection capabilities to implement a switch of protocol, the same way sslh works.

Server configuration

Some assumptions:

  • The port 443 of your server is publicly reachable
  • It runs ssh (but no need for the port 22 to be reachable)
  • Some web server is running on port 80 and it supports the ‘X-Forwarded-Proto’ header (see the documentation of your webserver to enable that).
  • You have generated SSL certificates (with the certificate and the private key concatenated into the single file /etc/ssl/private/certs.pem)

Now, you need to setup HAproxy. HAproxy defines backends and frontends, and it can communicate with these backends both at the HTTP and at the TCP level. Let us start with the backends:

The web server backend: we tell HAproxy that a server is running on the port 80, and speaks HTTP. On this backend, we add a X-Forwarded-Proto header, such that the web server knows that the clients are connecting securely. If you expose the same backend with HAproxy on the port 80, don’t forget to filter the X-Forwarded-Proto header!

backend secure_http
    reqadd X-Forwarded-Proto:\ https
    rspadd Strict-Transport-Security:\ max-age=31536000
    mode http
    option httplog
    option forwardfor
    server local_http_server 127.0.0.1:80

The ssh server:

backend ssh
    mode tcp
    option tcplog
    server ssh 127.0.0.1:22
    timeout server 2h

And now, the magic. This happens in the frontend section. We listen in TCP mode and inspect the connections. Depending on whether we see ssh or not, we hook it to one of the backends.

frontend ssl
    bind X.X.X.X:443 ssl crt /etc/ssl/private/certs.pem no-sslv3
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    tcp-request content accept if HTTP

    acl client_attempts_ssh payload(0,7) -m bin 5353482d322e30

    use_backend ssh if !HTTP
    use_backend ssh if client_attempts_ssh
    use_backend secure_http if HTTP

Once you are done, you can test if this works by connecting on the server with openssl.

openssl s_client -connect server.com:443 -quiet  

If you see a string that looks like

SSH-2.0-OpenSSH_6.6.1p1 Debian-7  

then everything went fine!

Connecting from an SSH client

To connect to your server from linux, just drop this in your ~/.ssh/config:

Host server.com
    ProxyCommand openssl s_client -connect server.com:443 -quiet

If you are on windows and you cannot install anything client side, there is also a solution for you. Download socat and putty (none of them requires admin rights). Then, with socat, run:

socat TCP-LISTEN:8888 OPENSSL:server.com:443,verify=0  

And with putty, direct your client to 127.0.0.1 on the port 8888.

For the technically aware readers

So how does this work exactly? Basically, RFC 4253, section 4.2 states that clients must send a string that starts with ‘SSH-2.0’ (this is also how sslh detects SSH). And 5353482d322e30 is the hex representation of the string ‘SSH-2.0’. So everything boils down to this line:

acl client_attempts_ssh payload(0,7) -m bin 5353482d322e30  

When a new connection is made on port 443, HAproxy decrypts the SSL layer and checks whether the stream of data sent by the client starts with this string. We use the result of this condition to choose the backend. This handles the case of ‘active’ SSH clients (like openssh-client on linux), which send their banner as soon as they connect. There are also ‘passive’ SSH clients (like putty), which wait for the server to speak first; these get routed to the ssh backend, and receive the server's banner, after 5 seconds (the inspect-delay).
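You can verify the hex constant yourself, e.g. with a couple of lines of Python:

```python
# 'SSH-2.0' encoded as hex matches the 7-byte prefix the ACL inspects.
banner_prefix = "SSH-2.0".encode("ascii")
print(len(banner_prefix))   # 7 bytes, hence payload(0,7)
print(banner_prefix.hex())  # '5353482d322e30'
```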

Conclusion

Happy SSH!


A new option when you have no public IP at home

by luodaoyi

V2EX – Tech / 2017-11-14 13:09

Mine doesn't have one...

I signed up for China Unicom broadband, but my neighbourhood has no Unicom coverage, so they set me up with Wasu (华数) broadband instead.

Later I discovered that Wasu broadband allows multi-dialing, stacking both upstream and downstream bandwidth.

But still no public IP. I have a Gen8 server at home with DSM installed directly on it.

So I found one of those so-called public-IPv4 tunnel boxes.

Then I connected the box's WAN port to the router, its LAN port to the Gen8 running DSM, and configured dual NICs.

It runs Docker.

DSM 6.x supports virtual machines, so I also run an Arch VM.

With haproxy in a Docker container doing TCP (layer-4) proxying, I can now reach the Arch box remotely from the office.

DSM also has Web Station, which can run PHP directly.

My little blog runs on it: https://luodaoyi.com

Since the IP is in mainland China, the site has to be registered (备案).

Oh, and Let's Encrypt can now issue certificates directly, since ports 80 and 443 both work.

