How to Make Your Web App Work Behind Corporate Firewalls

A practical guide to implementing server-side proxies in Nuxt 3

Nuxt 3 · TypeScript · Web Security · API Proxy · SSR · firewall

A few weeks ago, I received some puzzling feedback about my portfolio website. Several users reported that pages were loading, but showing no content whatsoever. No blog posts. No CV data. No project cards. Just empty containers with loading spinners that spun indefinitely.

At first, I couldn't reproduce the issue. Everything worked fine on my machine, on my phone, and in different browsers. The API was responding correctly, and Cloudinary was serving images without problems. What was going on?

Then I noticed the pattern. Every single report came from someone accessing my site from their workplace. Citrix environments. Corporate VPNs. Office networks. That's when it clicked: corporate firewalls were blocking my external API calls.

Understanding the Problem

To understand why this was happening, let's look at how my portfolio was architected. When a user visited my site, here's what their browser was trying to do:

User's Browser → my-api.railway.app (External API)
User's Browser → res.cloudinary.com (External CDN)

This architecture works perfectly fine for most users. But corporate firewalls are designed to be paranoid – and rightfully so. They're configured to block unknown external domains, cross-origin API requests, custom headers like X-API-Key, and connections to cloud services that aren't on an approved whitelist.

From the firewall's perspective, my user's browser was suddenly trying to reach some-random-app.railway.app – a domain the IT department had never heard of. The firewall did exactly what it was supposed to do: it blocked the request.

The frustrating part is that this happens silently. The user doesn't see a "blocked by firewall" message. They just see... nothing. The request times out, the loading spinner keeps spinning, and the page remains empty.

Why Server-Side Rendering Alone Doesn't Fix This

When I first diagnosed this problem, I thought I had an easy solution. My portfolio is built with Nuxt 3, which supports server-side rendering. Surely the data was being fetched on the server during the initial page load, right?

Well, yes and no. Here's the nuance that tripped me up.

With Nuxt's useFetch() composable, data is fetched server-side during the initial page render. When a user navigates directly to yoursite.com/blog, the server fetches the blog posts, renders the HTML, and sends a complete page to the browser. The firewall never sees a request to the external API because the server made that request, not the browser.

But here's the catch: client-side navigation bypasses SSR entirely.

When a user clicks a link within your site – say, from the homepage to the blog – Nuxt doesn't do a full page reload. Instead, it performs client-side navigation, which means the browser fetches the data directly. And that's where the firewall steps in and blocks the request.

There's also the image problem. Even if all your data loads server-side, images are always fetched by the browser. When your API returns a blog post with coverImage: "https://res.cloudinary.com/...", the browser has to fetch that image directly from Cloudinary. If the firewall blocks Cloudinary, your users see broken images – or no images at all.

So SSR helps with the initial page load, but it's not a complete solution. I needed something more robust.

The Solution: Route Everything Through Your Domain

The fix is conceptually simple: instead of having the browser call external services directly, route all requests through your own domain.

BEFORE (blocked):
Browser → my-api.railway.app  ❌ Firewall blocks
Browser → res.cloudinary.com  ❌ Firewall blocks

AFTER (works):
Browser → yourdomain.com/api/proxy/* → my-api.railway.app  ✅
Browser → yourdomain.com/api/image   → res.cloudinary.com  ✅

The key insight is that the firewall almost certainly allows requests to your domain – otherwise the user wouldn't be able to load your site at all. By creating server-side proxy routes, you can fetch data from external services on the server and pass it through to the browser. The browser only ever talks to your domain, and your server handles the external communication.

This approach has several benefits beyond just bypassing firewalls. Your API keys stay server-side and are never exposed to the browser. You get better control over caching and error handling. And you can transform data before it reaches the client – which becomes important for the image proxy.

Implementation in Nuxt 3

Let me walk you through how I implemented this. We'll need two proxy routes: one for API calls, and one for images.

Creating the API Proxy

The API proxy is a catch-all route that forwards requests to your backend. In Nuxt 3, you create this as a server route:

// server/api/proxy/[...path].ts
import { defineEventHandler, getQuery, createError } from 'h3'

const ALLOWED_PATHS = ['/cv', '/blog', '/projects']

export default defineEventHandler(async (event) => {
  const config = useRuntimeConfig(event)
  const apiUrl = config.public.apiUrl
  const apiKey = config.apiKey // Note: NOT public, stays server-side

  // Extract the path from the URL
  // e.g., /api/proxy/blog/my-post → /blog/my-post
  const path = '/' + (event.context.params?.path || '')

  // Security: Only allow specific endpoints
  // This prevents the proxy from being used to access arbitrary URLs
  const isAllowed = ALLOWED_PATHS.some(allowed =>
    path === allowed || path.startsWith(allowed + '/')
  )

  if (!isAllowed) {
    throw createError({
      statusCode: 403,
      message: 'Endpoint not allowed'
    })
  }

  // Forward query parameters
  const query = getQuery(event)
  const queryString = new URLSearchParams(
    query as Record<string, string>
  ).toString()
  
  const targetUrl = `${apiUrl}${path}${queryString ? '?' + queryString : ''}`

  try {
    const response = await fetch(targetUrl, {
      method: 'GET',
      headers: {
        'Content-Type': 'application/json',
        ...(apiKey && { 'X-API-Key': apiKey })
      }
    })

    if (!response.ok) {
      throw createError({
        statusCode: response.status,
        message: `API responded with ${response.status}`
      })
    }

    const data = await response.json()
    return data
    
  } catch (error) {
    // Re-throw errors we created above (e.g. the upstream status code)
    if (error && typeof error === 'object' && 'statusCode' in error) {
      throw error
    }
    // Otherwise, don't expose internal error details to the client
    console.error('Proxy error:', error)
    throw createError({
      statusCode: 502,
      message: 'Failed to fetch from upstream API'
    })
  }
})

A few things to note about this implementation. The ALLOWED_PATHS array is crucial for security – you don't want your proxy to become an open relay that can fetch arbitrary URLs. Only whitelist the specific endpoints your frontend actually needs.

The API key is stored in config.apiKey (not config.public.apiKey), which means it's only available server-side. This is important: the whole point of the proxy is to keep secrets off the client.
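For reference, here's a minimal sketch of how that split can look in nuxt.config.ts. The API_KEY and API_URL environment variable names are my own assumptions; adapt them to your setup:

// nuxt.config.ts (minimal sketch; adapt names to your project)
export default defineNuxtConfig({
  runtimeConfig: {
    // Server-only secret: available as config.apiKey, never sent to the browser
    apiKey: process.env.API_KEY,
    public: {
      // Public value: available as config.public.apiUrl on server and client
      apiUrl: process.env.API_URL || 'https://my-api.railway.app'
    }
  }
})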

The error handling wraps internal errors so you don't accidentally leak information about your backend infrastructure to users.

Creating the Image Proxy

The image proxy is similar in concept, but handles binary data instead of JSON:

// server/api/image.ts
import { defineEventHandler, getQuery, createError, setHeader } from 'h3'

const ALLOWED_DOMAINS = ['res.cloudinary.com']

export default defineEventHandler(async (event) => {
  const { url: imageUrl } = getQuery(event)

  // Validate the URL parameter exists
  if (!imageUrl || typeof imageUrl !== 'string') {
    throw createError({
      statusCode: 400,
      message: 'Missing url parameter'
    })
  }

  // Security: Validate the domain before fetching
  // This prevents the proxy from being used to fetch arbitrary images
  let parsedUrl: URL
  try {
    parsedUrl = new URL(imageUrl)
  } catch {
    throw createError({
      statusCode: 400,
      message: 'Invalid URL format'
    })
  }

  const isAllowed = ALLOWED_DOMAINS.some(domain =>
    parsedUrl.hostname === domain || 
    parsedUrl.hostname.endsWith('.' + domain)
  )

  if (!isAllowed) {
    throw createError({
      statusCode: 403,
      message: 'Domain not allowed'
    })
  }

  try {
    const response = await fetch(imageUrl)

    if (!response.ok) {
      throw createError({
        statusCode: response.status,
        message: 'Failed to fetch image'
      })
    }

    // Preserve the content type from the original response
    const contentType = response.headers.get('content-type') || 'image/jpeg'
    
    // Set appropriate headers for image caching
    // Images rarely change, so we can cache aggressively
    setHeader(event, 'Content-Type', contentType)
    setHeader(event, 'Cache-Control', 'public, max-age=31536000, immutable')

    // Return the image as a buffer
    const arrayBuffer = await response.arrayBuffer()
    return Buffer.from(arrayBuffer)
    
  } catch (error) {
    // Re-throw errors we created above (e.g. the upstream status code)
    if (error && typeof error === 'object' && 'statusCode' in error) {
      throw error
    }
    console.error('Image proxy error:', error)
    throw createError({
      statusCode: 502,
      message: 'Failed to fetch image'
    })
  }
})

The domain whitelist is especially important for the image proxy. Without it, attackers could use your server as an anonymous proxy to fetch content from anywhere on the internet. Only allow domains you actually use, like your CDN.

I added a try-catch around new URL() because malformed URLs will throw an exception. It's a small detail, but it prevents your server from crashing on bad input.

The Cache-Control header with max-age=31536000 (one year) and immutable tells browsers and CDNs to cache these images aggressively. Since Cloudinary URLs typically include content hashes, the same URL will always return the same image, making aggressive caching safe and effective.

The Magic: Auto-Replacing Image URLs

Here's where things get clever. Your API returns data with Cloudinary URLs embedded throughout – in cover images, in rich text content, in thumbnails. Instead of updating every component to handle proxied URLs, you can automatically replace all Cloudinary URLs in API responses before they reach the client.

// utils/processCloudinaryUrls.ts

const CLOUDINARY_PATTERN = /https?:\/\/res\.cloudinary\.com\/[^"'\s)]+/g

export function processCloudinaryUrls(data: unknown): unknown {
  // Handle strings: replace all Cloudinary URLs
  if (typeof data === 'string') {
    return data.replace(CLOUDINARY_PATTERN, (url) =>
      `/api/image?url=${encodeURIComponent(url)}`
    )
  }

  // Handle arrays: process each element
  if (Array.isArray(data)) {
    return data.map(processCloudinaryUrls)
  }

  // Handle objects: process each value
  if (typeof data === 'object' && data !== null) {
    const processed: Record<string, unknown> = {}
    for (const key in data) {
      if (Object.prototype.hasOwnProperty.call(data, key)) {
        processed[key] = processCloudinaryUrls(
          (data as Record<string, unknown>)[key]
        )
      }
    }
    return processed
  }

  // Return primitives unchanged
  return data
}

This recursive function walks through your entire API response and replaces every Cloudinary URL with the proxied version. It handles URLs in object properties, in arrays, in nested structures, and even in strings (which catches URLs embedded in rich text content from TipTap or similar editors).
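To make the transformation concrete, here's what a hypothetical response looks like before and after processing:

// Hypothetical input: a post object as returned by the API
const post = {
  title: 'Hello World',
  coverImage: 'https://res.cloudinary.com/demo/image/upload/v1/cover.jpg'
}

const processed = processCloudinaryUrls(post)
// → {
//     title: 'Hello World',
//     coverImage: '/api/image?url=https%3A%2F%2Fres.cloudinary.com%2Fdemo%2Fimage%2Fupload%2Fv1%2Fcover.jpg'
//   }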

Now update your API proxy to use this function:

// In server/api/proxy/[...path].ts

import { processCloudinaryUrls } from '~/utils/processCloudinaryUrls'

// ... inside your event handler, after fetching data:

const data = await response.json()
return processCloudinaryUrls(data)

With this in place, your frontend code doesn't need to change at all. Components that expect image URLs will receive proxied URLs automatically.

Updating Your Pages

The final step is updating your pages to use the proxy instead of calling the external API directly:

// Before: Direct API call (blocked by firewalls)
const { data } = await useFetch('https://my-api.railway.app/blog')

// After: Proxied through your domain (works everywhere)
const { data } = await useFetch('/api/proxy/blog')

Since the proxy adds the API key automatically, you don't need to include authentication in your frontend code. This is cleaner and more secure.

Bonus: Improving the Navigation Experience

While implementing the proxy, I also took the opportunity to improve how my portfolio handles navigation. If you're using useFetch() with the default settings, navigating between pages can feel sluggish because Nuxt waits for the data to load before rendering the new page.

Lazy Loading for Instant Navigation

By adding lazy: true, the page renders immediately with a loading state, and data populates when it's ready:

// Before: Navigation blocked until data loads
const { data } = await useFetch('/api/proxy/blog')

// After: Page renders immediately, data loads in background
const { data, pending } = useFetch('/api/proxy/blog', {
  lazy: true
})

In your template, you can show a skeleton loader while data is loading:

<template>
  <div v-if="pending" class="space-y-4">
    <div class="h-48 bg-zinc-800 rounded-lg animate-pulse" />
    <div class="h-48 bg-zinc-800 rounded-lg animate-pulse" />
    <div class="h-48 bg-zinc-800 rounded-lg animate-pulse" />
  </div>
  
  <div v-else class="space-y-4">
    <BlogCard 
      v-for="post in data" 
      :key="post.id" 
      :post="post" 
    />
  </div>
</template>

Preventing Layout Shift

One common issue with loading states is layout shift – the footer jumps up when content is loading, then snaps back down when content appears. You can prevent this by setting a minimum height on the loading container:

<div :class="pending ? 'min-h-[80vh]' : ''">
  <!-- Content or skeleton -->
</div>

This ensures the page layout remains stable during the loading phase.

Security Considerations

When implementing a proxy, security should be top of mind. You're essentially allowing your server to make requests on behalf of clients, which can be abused if not properly restricted.

Whitelist endpoints aggressively. Only proxy the specific API paths your frontend actually needs. If your frontend only calls /blog and /projects, don't allow /admin or /internal through the proxy.

Whitelist domains for images. The image proxy should only allow your known CDN domains. Without this restriction, attackers could use your server to fetch arbitrary content, potentially for scraping, bypassing rate limits, or hiding their identity.

Keep secrets server-side. The proxy adds API keys automatically, so they never appear in client-side code or network requests visible in browser dev tools.

Consider rate limiting. If your site gets significant traffic, you might want to add rate limiting to prevent abuse. Nuxt doesn't include built-in rate limiting, but you can implement it with middleware or use a service like Cloudflare.
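As a rough illustration (not a production-grade solution), a per-IP limiter can live in a Nitro server middleware. This is a minimal in-memory sketch; the window and request limit below are arbitrary, and it won't hold up across multiple server instances:

// server/middleware/rate-limit.ts – a minimal in-memory sketch
import { defineEventHandler, createError, getRequestIP } from 'h3'

const WINDOW_MS = 60_000   // 1-minute window (arbitrary)
const MAX_REQUESTS = 100   // per IP per window (arbitrary)
const hits = new Map<string, { count: number; resetAt: number }>()

export default defineEventHandler((event) => {
  // Only guard the proxy routes
  if (!event.path.startsWith('/api/proxy') && !event.path.startsWith('/api/image')) {
    return
  }

  const ip = getRequestIP(event, { xForwardedFor: true }) || 'unknown'
  const now = Date.now()
  const entry = hits.get(ip)

  // Start a fresh window for this IP if none exists or the old one expired
  if (!entry || now > entry.resetAt) {
    hits.set(ip, { count: 1, resetAt: now + WINDOW_MS })
    return
  }

  entry.count++
  if (entry.count > MAX_REQUESTS) {
    throw createError({ statusCode: 429, message: 'Too many requests' })
  }
})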

Log and monitor. Keep an eye on proxy usage to detect unusual patterns. A sudden spike in image proxy requests might indicate someone is trying to use your server as a CDN proxy.

The Results

After deploying the server-side proxy, I reached out to the users who had originally reported issues. Every single one confirmed that the site now works correctly from their corporate networks.

Beyond fixing the firewall issue, the proxy architecture brought several improvements. API keys are now completely hidden from the browser – you can't see them in network requests or client-side code. Image caching is more consistent because all images go through a single endpoint with predictable cache headers. And the separation between client and server concerns is cleaner overall.

The performance impact is minimal. Proxied requests add a small amount of latency since they make an extra hop through your server, but for most applications this is negligible – especially compared to the alternative of having no data load at all.

Conclusion

If your users might access your web application from corporate environments, a server-side proxy isn't just nice to have – it's essential. The implementation is straightforward: create a proxy route for your API calls, create another for images, auto-replace external URLs in responses, and update your pages to use the proxy endpoints.

The result is a web application that works for everyone, regardless of their network restrictions. And as a bonus, you get better security, cleaner architecture, and more control over caching and error handling.


Have you encountered similar issues with corporate firewalls? I'd love to hear about your solutions – reach out on LinkedIn or leave a message!