Deploying next.js to openshift using nginx / docker

Asked by: Before The Empire, 11/18/2023. Last edited by: Before The Empire. Updated: 11/20/2023. Views: 29

Q:

This bounty expires in 7 days. Answers to this question are eligible for a +150 reputation bounty. Before The Empire is looking for an answer from a reputable source.

When using the default next.js output type and deploying to openshift with docker and nginx, I get a 403 Forbidden from nginx.

If I set output: 'export' in next.config.js, I get the site, but I occasionally hit a 503 when refreshing a page, or a client-side application error occurs.

If I use the default (without output: 'export'), I get the 403.

Ideally, I would like to deploy without a static export. I am using a docker container running node, deployed to openshift.

Here is the error (screenshot not included).

Here is my nginx.conf file:

# Set worker processes based on your CPU cores, nginx does not benefit from setting more than that
worker_processes auto;

# Load perl module for env var substitution
load_module modules/ngx_http_perl_module.so;

# read in env variable
env runtimeEnvironment;
env applicationName;

# number of file descriptors used for nginx
# the limit for the maximum FDs on the server is usually set by the OS.
# if you don't set FD's then OS settings will be used which is by default 2000
worker_rlimit_nofile 100000;

# change pid loc for lower privileged account
pid        /tmp/nginx.pid;

# Log errors/emerg/crit to syslog (splunk)
# Log crit events to local container output
error_log syslog:server=splunk-dmz.nvthbs.local:21514 error;
error_log /var/log/nginx/error.log crit;

# provides the configuration file context in which the directives that affect connection processing are specified.
events {
    # determines how many clients will be served per worker
    # max clients = worker_connections * worker_processes
    # max clients is also limited by the number of socket connections available on the system (~64k)
    worker_connections 4000;

    # optimization to serve multiple clients per thread
    use epoll;

    # enable multiple connections
    multi_accept on;
}

http {
    # set env vars
    perl_set $runtimeenv 'sub { return $ENV{"runtimeEnvironment"};}';
    perl_set $appName 'sub { return $ENV{"applicationName"};}';

    # limit the number of connections per single IP - DDOS Protection
    limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

    # limit the number of requests for a given session - DDOS Protection
    limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

    # if the request body size is more than the buffer size, then the entire (or partial)
    # request body is written into a temporary file
    # affects any POST actions sent to nginx
    client_body_buffer_size  128k;

    # buffer size for reading client request header
    client_header_buffer_size 10K;

    # maximum number and size of buffers for large headers to read from client request
    large_client_header_buffers 4 256k;

    # time a server will wait for client header to be sent after request
    client_header_timeout 12;

    # time a server will wait for client body to be sent after request
    client_body_timeout 12;

    # timeout for keep-alive connections to client
    keepalive_timeout 30;

    # timeout between two reads - keeps memory free -- default 60
    send_timeout 10;

    # set log format for access logs
    log_format  custom 'Application:$appName Environment:$runtimeenv '
                              '- $remote_addr - $remote_user [$time_local] '
                                 '"$request" $status $body_bytes_sent '
                                 '"$http_referer" "$http_user_agent" '
                                 '"$http_x_forwarded_for" $request_id ';
    
    # Condition map to reduce access logging and eliminate 200 / 300
    map $status $loggable {
        ~^[23]  0;
        default 1;
    }

    # Access log configuration with conditional statement applied to reduce logging events
    # access_log syslog:server=splunk-dmz.nvthbs.local:21514,facility=local7,tag=nginx custom if=$loggable;

    # access logging all events
    access_log syslog:server=splunk-dmz.nvthbs.local:21514,facility=local7,tag=nginx custom;

    # copies data between one FD and other from within the kernel
    # faster than read() + write()
    sendfile on;

    # send headers in one piece - more performant
    tcp_nopush on;

    # don't buffer data sent, good for small data bursts in real time
    tcp_nodelay on;

    # enable gzip compression
    gzip on;
    gzip_min_length 1024;
    gzip_comp_level 2;
    gzip_vary on;
    gzip_disable msie6;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types
        # text/html is always compressed by HttpGzipModule
        text/css
        text/javascript
        text/xml
        text/plain
        text/x-component
        application/javascript
        application/x-javascript
        application/json
        application/xml
        application/rss+xml
        application/atom+xml
        font/truetype
        font/opentype
        application/vnd.ms-fontobject
        image/svg+xml;

    # allow the server to close connection on non responding client, this will free up memory
    reset_timedout_connection on;

    # number of requests client can make over keep-alive
    # having high value can be especially beneficial in testing load generation
    # Consider evaluating for production
    keepalive_requests 100000;

    # Set temp paths - need to be explicitly set to /tmp due to permissions
    proxy_temp_path /tmp/proxy_temp;
    client_body_temp_path /tmp/client_temp;
    fastcgi_temp_path /tmp/fastcgi_temp;
    uwsgi_temp_path /tmp/uwsgi_temp;
    scgi_temp_path /tmp/scgi_temp;

    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

  server {
    # zone which we want to limit by upper values, we want limit whole server
      limit_conn conn_limit_per_ip 10;
      limit_req zone=req_limit_per_ip burst=10 nodelay;

      listen 8080;
      listen 5000 ssl;
      ssl_certificate    /etc/ssl/crt/tls.crt; 
      ssl_certificate_key    /etc/ssl/key/tls.key;
      ssl_protocols     TLSv1.2;

      default_type application/octet-stream;

      root /usr/share/nginx/apps;

      location / {
        try_files $uri $uri/ /index.html =404;
        add_header 'Access-Control-Allow-Origin' '*' always;
      }

       # To allow POST on static pages
        error_page  405     =200 $uri;
  }
}

Without using output: 'export', how can I avoid the 403? Alternatively, how can I add logging in openshift, or log the nginx activity somewhere, so I can see which part of the config is causing the 403?

next.config.js

const nextConfig = {
  output: 'export',
  distDir: 'dist'
}

module.exports = nextConfig;

Dockerfile

## Stage 0 : Set Base Image and Version ##
## Note: By default, these are pulled from the myorg-images project in the OpenShift repo. 
##       This requires authorization to OpenShift via a docker login command prior to build.
##       To build from the public Node image, replace the following ARGs and uncomment all
##       segments labeled PUBLIC NODE
##       ARG nodeBaseImage=node
##       ARG nodeBaseVersion=latest
ARG nodeBaseImage=default-route-openshift-image-registry.apps.oscp2.myorg.com/myorg-images/myorg_node
ARG nodeBaseVersion=18.17.1

## Stage 1 : Build ##
FROM ${nodeBaseImage}:${nodeBaseVersion} AS node
LABEL maintainer="DevOps <[email protected]>"
EXPOSE 4200
ARG build=true 

##### PUBLIC NODE #####
## Note: Uncomment the following if using the public, non-myorged node image
##       ENV CHROME_BIN=/usr/bin/chromium-browser
##       ENV SASS_BINARY_NAME=linux-x64-67
##       RUN wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | apt-key add - && \
##           echo 'deb http://dl.google.com/linux/chrome/deb/ stable main' >> /etc/apt/sources.list && \
##           apt-get update && apt-get install --no-install-recommends -y google-chrome-stable
##       RUN npm install -g next
## Storing node modules on a separate layer will prevent unnecessary npm installs at each build
COPY package.json package-lock.json .npmrc ./
RUN npm ci && mkdir /app && mv ./node_modules ./app
WORKDIR /app
COPY . .

## Build the next app in production mode and store the artifacts in out folder
## Note: Subfolders match the runtimeEnvironment parameter in OpenShift
RUN if [ "$build" = "true" ]; then \
npm run build --output-path=app/Test && \
npm run build --output-path=app/Stage && \
npm run build --output-path=app/Production; fi

### Default entrypoint references start-docker script in package.json
##ENTRYPOINT [ "next", "start" ]
ENTRYPOINT [ "npm", "start", "start-docker" ]

nginx.Dockerfile

## Stage 0 : Set Base Image and Version ##
## Note: nodeImageName and buildID refer to the image produced by the other Dockerfile in this
##       project. nodeImageName is a REQUIRED argument to run this build. ##
##       nginxBaseImage is the Myorged nginx image. This can be replaced with 'nginx' to use
##       the standard public nginx image. This does not include Myorg certificates or OS permissions
ARG nginxBaseImage=default-route-openshift-image-registry.apps.oscp2.myorg.com/myorg-images/myorg_nginx
ARG nodeImageName
ARG buildID=latest
ARG nginxVersion=1.17.10-perl

## STAGE 1: Create Builder Image from app image ##
FROM ${nodeImageName}:${buildID} AS builder

## STAGE 2: Create web server image ##
FROM ${nginxBaseImage}:${nginxVersion} as final
LABEL maintainer="DevOps <[email protected]>"
ENV runtimeEnvironment=Development
EXPOSE 5000
EXPOSE 8080

## Remove default nginx website ##
RUN rm -rf /usr/share/nginx/html


## From 'builder' stage copy over the artifacts in dist folder ##
COPY --from=builder /app /usr/share/nginx/app

## From 'builder' stage copy over nginx config file ##
COPY nginx.conf /etc/nginx/nginx.conf

## Set user context to www-data for non OpenShift envs ##
USER www-data

## Container launch command creates symbolic link between environment build and default nginx website ##
CMD ["/bin/bash","-c","ln -s /usr/share/nginx/app/${runtimeEnvironment} /usr/share/nginx/html && nginx -g 'daemon off;'"]
Tags: docker, nginx, next.js, openshift

A:

0 votes VonC 11/20/2023 #1

You have:

[Next.js App] --build--> [Node Docker Container]
       |
       |--copy artifacts--> [NGINX Docker Container]
                                |
                                |--deploy--> [OpenShift Environment]

I would start by fixing the nginx.conf file so it serves the Next.js app correctly: make sure the root directive points to the directory inside the NGINX container where the Next.js build output lives.

To add logging to OpenShift and investigate the cause of the 403, configure NGINX to write its logs to a path you can access. For example, point the error_log and access_log directives at file paths instead of syslog:

server {
    listen 8080;
    listen 5000 ssl;
    ...
    root /usr/share/nginx/app;  # Update this to the correct path of your Next.js build

    location / {
        try_files $uri $uri/ /index.html;  # Make sure this line correctly serves your Next.js app
        ...
    }

    error_log /path/to/your/nginx/error.log warn;  # Update log file paths
    access_log /path/to/your/nginx/access.log;     # access_log takes an optional format name, not a log level
}
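Since this runs in a container on OpenShift, an alternative worth considering is sending the logs to the container's standard streams, which OpenShift collects automatically (the official nginx image does the same by symlinking /var/log/nginx/*.log to /dev/stdout and /dev/stderr). nginx records the reason for a 403, such as "directory index of ... is forbidden", at the error level, so this should reveal which part of the config rejects the request. A sketch, assuming the 'custom' log_format from your http block:

```nginx
# Write logs to the container's standard streams so `oc logs <pod>` shows them.
error_log  /dev/stderr warn;      # includes the reason for each 403 response
access_log /dev/stdout custom;    # 'custom' is the log_format defined in the http block
```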

Before deploying, test the NGINX configuration with nginx -t to make sure there are no syntax errors.

In the NGINX Dockerfile, make sure the COPY command transfers the built Next.js app from the Node container into the correct directory in the NGINX container.

...
## From 'builder' stage copy over the artifacts in dist folder ##
COPY --from=builder /app/.next /usr/share/nginx/app  # Make sure this path is correct

## From 'builder' stage copy over nginx config file ##
COPY nginx.conf /etc/nginx/nginx.conf
...

If you do not want to use a static export (output: 'export'), make sure your Next.js application is configured for server-side rendering. The NGINX configuration should then point to the server-rendered pages.

"Then, the NGINX configuration should point to the server-rendered pages."
Can you clarify? I am trying not to use output: 'export'.

When you choose not to use output: 'export' in next.config.js, you are relying on Next.js's server-side rendering capabilities instead of exporting your site as static HTML.

If you omit output: 'export', Next.js defaults to server-side rendering (SSR) or static site generation (SSG), depending on how each page is configured.
This means Next.js renders pages on the server at runtime (SSR) or at build time (SSG), rather than pre-generating every page as static HTML.

In this scenario, NGINX acts as a reverse proxy in front of the Node.js server, which is responsible for rendering pages dynamically.

[Client Request]
       |
       v
[NGINX Container] --proxy pass--> [Node.js Container (Next.js App)]
       |                                  |
       |                                  | --SSR/SSG Rendering
       |                                  |
       |<------------response-------------/
       |
       v
[OpenShift Environment]

For server-side rendering, the NGINX configuration involves proxying: instead of serving static files, NGINX should forward requests to the Node.js server. This is achieved with the proxy_pass directive.
The location / block in the NGINX configuration should contain the settings that proxy the requests:

server {
    listen 8080;
    listen 5000 ssl;
    ...
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://nodejs:3000;  # Replace with the appropriate internal URL to your Node.js server
    }

    ...
}

In this configuration, proxy_pass http://nodejs:3000; tells NGINX to forward all incoming requests to the Node.js server running the Next.js application. The nodejs:3000 part should be replaced with the actual hostname and port of your Node.js server in your Docker network or OpenShift environment.
The proxy_set_header lines ensure the Node.js server receives important information about the original request, such as the client's real IP address.

Regarding Docker and OpenShift, make sure the NGINX container can reach the Node.js container. This usually involves Docker network configuration or an OpenShift Service.
Also make sure the Node.js server inside the Docker container is listening on the port you specified in the proxy_pass directive.
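On OpenShift, the usual way to make the Node.js container reachable from the NGINX pod is a Service: its name becomes the in-cluster DNS hostname that proxy_pass uses. A minimal sketch, where the name nextjs-app and the label app: nextjs-app are hypothetical and must match your actual deployment:

```yaml
# Hypothetical Service exposing the Next.js pods inside the cluster.
# With this in place, nginx can use: proxy_pass http://nextjs-app:3000;
apiVersion: v1
kind: Service
metadata:
  name: nextjs-app          # becomes the in-cluster DNS name
spec:
  selector:
    app: nextjs-app         # must match the labels on the Next.js pods
  ports:
    - port: 3000            # port nginx connects to
      targetPort: 3000      # port the Next.js server listens on inside the pod
```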

This lets your Next.js application benefit from server-side rendering, generating pages dynamically for each request, while NGINX handles incoming traffic and forwards it appropriately. No more output: 'export'.
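The Node container that NGINX proxies to could then run the Next.js server directly instead of handing its build output to NGINX. A minimal sketch, assuming output: 'standalone' with distDir: 'dist' in next.config.js; the image tags are illustrative, and your hardened base images from the question's Dockerfile would take their place:

```dockerfile
# Sketch of a Node container serving Next.js via SSR (assumes
# output: 'standalone' and distDir: 'dist' in next.config.js).
FROM node:18.17.1 AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:18.17.1-slim
WORKDIR /app
ENV NODE_ENV=production PORT=3000
# The standalone output bundles server.js plus the node_modules it needs;
# static assets and /public must be copied alongside it.
COPY --from=build /app/dist/standalone ./
COPY --from=build /app/dist/static ./dist/static
COPY --from=build /app/public ./public
EXPOSE 3000
CMD ["node", "server.js"]
```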

Comments

0 votes Before The Empire 11/21/2023
"Then, the NGINX configuration should point to the server-rendered pages." Can you clarify? I am trying not to use output: 'export'.
0 votes VonC 11/21/2023
@BeforeTheEmpire Sure, I have edited the answer to propose an approach that works without output: 'export'.