How to deal with mixed content in a website which should be secured as https?


Like others have said, you should serve all the content over HTTPS.

You could use a reverse proxy to do this. Server A handles the HTTPS connection and forwards the request to server B over plain HTTP. Server B then sends the response back to server A, which updates the response headers so the response appears to come from server A itself, and forwards it to the user.

You would make each of your apps on server B available at a URL on domain A, for instance https://www.domain-a.com/appOnB1 and https://www.domain-a.com/appOnB2. The proxy would then forward the requests to the right port on server B.

For Apache this would mean two extra lines in your configuration per app:

ProxyPass "/fooApp" "http://IP_ADDR_OF_SERVER_B:PORT"
ProxyPassReverse "/fooApp" "http://IP_ADDR_OF_SERVER_B:PORT"

The first line makes sure that Apache forwards the request to server B; the second makes sure that Apache rewrites the address in the HTTP response headers so the response appears to come from server A instead of server B.

Since you need this proxy to be dynamic, it might make more sense to set it up inside your Node.js app on server A, because that app probably already knows about the different apps living on server B. I'm no Node.js expert, but a quick search turned up https://github.com/nodejitsu/node-http-proxy, which looks like it would do the trick and seems to be a well-maintained project.

The general idea remains the same though: You make the apps on server B accessible through server A using a proxy, using server A's HTTPS set-up. To the user it will look like all the apps on server B are hosted on domain A.

After you set this up, you can use https://DOMAIN_NAME_OF_SERVER_A/fooApp as the URL for your iframe to load the apps over HTTPS.

Warning: You should only do this if you can route this traffic internally (server A and B can reach each other on the same network); otherwise the traffic could be intercepted on its way from server A to server B.

Author: librae

Updated on July 13, 2022

Comments

  • librae
    librae almost 2 years

    I am building a website on server A (with a registered domain name) that lets people create and run their "apps".
    These "apps" are actually docker containers running on server B; inside each container lives a small web app that can be accessed directly at:

    http://IP_ADDR_OF_SERVER_B:PORT
    

    The PORT is a random high-numbered port that maps to the docker container. I have an SSL certificate working on server A, so the site works fine when accessed at:

    https://DOMAIN_NAME_OF_SERVER_A
    

    The problem is that I embedded the "apps" in an iframe using the plain "http" address above, so my browser (Chrome) refuses to load it and reports this error:

    Mixed Content: The page at 'https://DOMAIN_NAME_OF_SERVER_A/xxx' was loaded over HTTPS, but requested an insecure resource 'http://IP_ADDR_OF_SERVER_B:PORT/xxx'. This request has been blocked; the content must be served over HTTPS.

    So how should I deal with this issue?
    I am new to full-stack development, and I'd appreciate any knowledge you can share on how to build a healthy HTTPS website while solving this problem properly.


    Supplementary explanation

    Ok I think I just threw out the outline of the question, here goes more details.

    I see that it is simple and straightforward to serve the iframe requests over HTTPS; then the browser won't complain anymore.

    However, the trouble is that since all the "apps" are dynamically created and removed, it seems I would need to prepare a certificate for each one of them.

    Will a self-signed certificate work without being blocked or flagged by the browser? Or is there a way to serve all the "apps" with one SSL certificate?


    Software environment

    Server A: running a Node.js website listening on port 5000, served through an Nginx proxy_pass:

    server {
        listen 80;
        server_name DOMAIN_NAME_OF_SERVER_A;
    
        location / {
            proxy_set_header   X-Real-IP $remote_addr;
            proxy_set_header   Host      $http_host;
            proxy_pass         http://127.0.0.1:5000;
        }
    }
    server {
        # "listen 443 ssl" replaces the deprecated "ssl on;" directive
        listen 443 ssl;
        server_name DOMAIN_NAME_OF_SERVER_A;
    
        ssl_certificate     /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_A.cer;
        ssl_certificate_key /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_A.key;
        ssl_session_timeout 5m;

        location / {
            proxy_set_header   X-Real-IP $remote_addr;
            proxy_set_header   Host      $http_host;
            proxy_pass         http://127.0.0.1:5000;
        }
    }
    

    Server B: running Node.js apps listening on different random high port numbers such as 50055, assigned dynamically when the "apps" are created. (These apps actually run in docker containers, though I don't think that matters.) Nginx can be run if needed.

    Server A and server B talk to each other over the public internet.


    Solution

    Just as all the answers said, especially the one from @eawenden, I needed a reverse proxy to achieve my goal.

    In addition, I did a few more things:
    1. Assigned a domain name to server B so it can use a Let's Encrypt certificate.
    2. Proxied a predefined URL path to a specific port.

    So I set up a reverse proxy server with Nginx on server B, which proxies all requests like:

    https://DOMAIN_NAME_OF_SERVER_B/PORT/xxx
    

    to

    http://127.0.0.1:PORT/xxx
    

    P.S.: the Nginx reverse proxy config on server B:

    server {
        # "listen 443 ssl" replaces the deprecated "ssl on;" directive
        listen 443 ssl;
        server_name DOMAIN_NAME_OF_SERVER_B;
    
        ssl_certificate     /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_B.cer;
        ssl_certificate_key /etc/nginx/ssl/DOMAIN_NAME_OF_SERVER_B.key;
        ssl_session_timeout 5m;
    
        rewrite_log off;
        error_log   /var/log/nginx/rewrite.error.log info;
    
        # Capture the leading port number from the path, strip it,
        # and proxy the rest to that port on localhost.
        location ~ ^/(?<port>\d+)/ {
            rewrite ^/\d+(/.*) $1 break;
            proxy_pass http://127.0.0.1:$port;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_read_timeout 86400;
        }
    }
    

    Thus everything seems to be working as expected!
    Thanks again to all the answerers.

  • librae
    librae over 7 years
    Understand, please see my supplementary explanation to my question.
  • librae
    librae over 7 years
    Thank you for the suggestion!
  • librae
    librae over 7 years
    It crossed my mind, but I hadn't thought it through clearly enough, so I haven't tried it yet. I guess I need to make the request something like https://DOMAIN_NAME_OF_SERVER_A/proxy?addr=NAME_OF_APP_ON_SERVER_B&port=PORT to substitute for the original http way: http://IP_ADDR_OF_SERVER_B:PORT. Is that right? Even so, I'm still not clear on how to implement it. Do I need to add some APIs to the Node.js app on server A, or do I need to make use of the Nginx on server B?
  • eawenden
    eawenden over 7 years
    In my original answer you would add a location block for each app above the location / block. I'm afraid a query parameter would not work, because I think Nginx ignores it when matching location blocks. In that solution the traffic would not even hit your Node.js application, so it's hard to make it dynamic. A simpler solution for your situation might be to use an HTTP proxy from inside your Node.js application. A Google search came up with github.com/nodejitsu/node-http-proxy as a package that might be suitable, but I'm not a Node.js expert.
  • eawenden
    eawenden over 7 years
    I've edited my answer to make it more general about the idea of using a proxy and included the bit about the NodeJS project.
  • librae
    librae over 7 years
    Thank you eawenden! I have an overall understanding now. One more quick question: since my server A and server B communicate over the public internet, it seems impossible to add a reverse proxy on server A, according to your warning. So should I use another domain and certificate for server B, and do the proxy_pass on server B?
  • librae
    librae over 7 years
    Thank you a lot for the additional instructions. It's a professional answer that will surely lead to the correct solution. There remain some more specific and detailed points for me to think over, perhaps beyond the original question, like what you mentioned about the reverse proxy.
  • librae
    librae over 7 years
    Thanks for your information Alex! I'll take a deeper look at nginx-proxy and may ask for suggestions later on.
  • eawenden
    eawenden over 7 years
    You can still proxy from application A directly to the apps on server B, because application A already knows about your dynamic apps on server B. You just need to make sure that the connection between A and B is encrypted (https). You don't need to get a paid certificate for this because the end-user never communicates with server B directly, so you can just create a self-signed one for free. You can even set this up with a "fake" domain if you just add it to the hosts file on server A.
  • eawenden
    eawenden over 7 years
    Yeah a wildcard certificate would be a simple solution that will work well. The only thing to solve then is that you can't add ports to your DNS configuration to point to the docker apps directly, so you would need to set up a proxy on server B to direct the incoming traffic to the correct app. You could use github.com/jwilder/nginx-proxy to set this up pretty easily.
  • librae
    librae over 7 years
    Thanks for the detailed instructions!
  • Marcelo
    Marcelo about 2 years
    If one is using Nginx as a reverse proxy, this header should be put inside the server block (not inside the location block where proxy_pass goes). By using this header, one can skip editing the app's source code.
  • Tejas Tank
    Tejas Tank almost 2 years
    Professionally speaking, fixing this via an Nginx header is the right choice for any website.