Server Networking Overview - Current State
With multiple projects running simultaneously on my development server, the networking landscape has grown more complex. This post documents the current state of all running services, their port allocations, and how they interconnect. This is primarily for my own reference as I consider improving the reverse proxy architecture.
Architecture Overview
The server currently runs a mix of different application types:
- Web Applications: Two main sites (production Flask blog and development Next.js app)
- Microservices: Docker Swarm-based arbitrage trading system
- Development Tools: Firebase emulators for local development
- Database Services: MongoDB, MySQL with phpMyAdmin
- Gaming: Minecraft server
- Infrastructure: Nginx reverse proxy, FTP, custom SSH port
Network Architecture Diagram
Service Inventory
Web Serving Layer
| Service | Port | Purpose | Status |
|---|---|---|---|
| Nginx | 80, 443 | Reverse proxy for gabrielpenman.com with SSL termination | Running (systemd) |
| Flask Blog | 5000 | Production blog backend (blog-server.service) | Running (systemd) |
| Next.js geo-butler | 3000 | Development/experimental SEO tool (geo-butler-dev.service) | Running (systemd) |
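Both web apps run as systemd units. For reference, a minimal unit for a service like the Flask blog might look like the sketch below; the ExecStart command, paths, and user here are assumptions, not the contents of the actual unit file:

```ini
# /etc/systemd/system/blog-server.service — illustrative sketch only;
# ExecStart, WorkingDirectory, and User are assumed, not the real values
[Unit]
Description=Flask blog backend
After=network.target

[Service]
User=gabriel
WorkingDirectory=/home/gabriel/blog-new
ExecStart=/usr/bin/python3 -m flask run --host=127.0.0.1 --port=5000
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Binding to 127.0.0.1 keeps the backend reachable only through the Nginx proxy, which matches the overall architecture here.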
Firebase Development Emulators
Running as part of the geo-butler development environment:
| Emulator | Port | Purpose |
|---|---|---|
| Firebase UI | 4000 | Web interface for emulator suite |
| Cloud Functions | 5001 | Serverless function emulation |
| Hosting | 5002 | Firebase hosting emulation |
| Firestore | 8081 | NoSQL database emulation |
| Authentication | 9099 | Auth service emulation |
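These ports are normally pinned in the project's firebase.json so the emulator suite starts the same way every time. A sketch matching the table above (the actual file in the geo-butler repo may differ):

```json
{
  "emulators": {
    "ui": { "enabled": true, "port": 4000 },
    "functions": { "port": 5001 },
    "hosting": { "port": 5002 },
    "firestore": { "port": 8081 },
    "auth": { "port": 9099 }
  }
}
```

Note the Firestore emulator is moved off its usual 8080 default here, since phpMyAdmin already occupies that port.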
Arbitrage Trading Microservices
Docker Swarm-based microservice architecture for cryptocurrency arbitrage:
| Service | Port | Container | Purpose |
|---|---|---|---|
| Webapp | 3005 | arbitrage_webapp | Web interface |
| Pool | 7500 | arbitrage_pool | Liquidity pool monitoring |
| Registry | 7503 | arbitrage_registry | Service registry and discovery |
| Config | 7504 | arbitrage_config | Configuration management |
| Telegram | 7505 | arbitrage_telegram | Telegram bot integration |
| Network | 8888-8889 | arbitrage_network | Network monitoring |
Database Services
| Service | Port | Purpose | Access |
|---|---|---|---|
| MongoDB | 27017 | Primary database for arbitrage services | Docker container (mongo:7.0) |
| MySQL | 3306 | WordPress database | Docker container (internal) |
| phpMyAdmin | 8080 | MySQL admin interface | http://localhost:8080 |
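A compose-style sketch of how these three containers could be wired together; service names, image tags, and credentials below are assumptions for illustration, not the live configuration:

```yaml
# Illustrative compose sketch — names and credentials are assumed
services:
  mongo:
    image: mongo:7.0
    ports:
      - "27017:27017"        # published for the arbitrage services
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder
    # internal only: no host port published, reachable on the Docker network
  phpmyadmin:
    image: phpmyadmin:latest
    ports:
      - "8080:80"            # admin UI at http://localhost:8080
    environment:
      PMA_HOST: mysql        # resolves via the Docker network
```

Keeping MySQL unpublished while exposing phpMyAdmin mirrors the access pattern in the table: the database itself stays internal.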
Other Services
| Service | Port | Purpose |
|---|---|---|
| Minecraft Server | 25565 | Multiplayer gaming server (Java edition) |
| BlueMap (Dynmap) | 8100 | Live Minecraft world map renderer |
| Depopper Research | 5123 | Experimental image processing pipeline playground |
| SSH | 2222 | Secure shell access (custom port) |
| FTP | 21 | File transfer protocol |
| DNS | 53 | Local DNS resolution (systemd-resolved) |
Nginx Reverse Proxy Routes
Many services are exposed through Nginx using /dev/* paths, allowing web access to internal services without direct port exposure:
| Public Path | Proxies To | Service |
|---|---|---|
| /dev/3000 | localhost:3000 | Next.js geo-butler development site |
| /dev/3005 | localhost:3005 | Arbitrage webapp dashboard |
| /dev/4000 | localhost:4000 | Firebase Emulator UI |
| /dev/5001 | localhost:5001 | Firebase Cloud Functions |
| /dev/5123 | localhost:5123 | Depopper research playground |
| /dev/8081 | localhost:8081 | Firebase Firestore emulator |
| /dev/8100 | [::1]:8100 | BlueMap (Minecraft live map) |
| /dev/9099 | localhost:9099 | Firebase Authentication emulator |
| /dev/api/ | localhost:3005/api/ | Arbitrage API endpoints |
This pattern allows accessing services like https://gabrielpenman.com/dev/3005 instead of opening additional firewall ports.
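Each /dev/* route boils down to a small Nginx location block. A sketch of the pattern (directive details may differ from the live config):

```nginx
# Sketch of the /dev/* pattern — one location block per internal service
location /dev/3005/ {
    proxy_pass http://localhost:3005/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
}

# BlueMap listens on the IPv6 loopback in this setup
location /dev/8100/ {
    proxy_pass http://[::1]:8100/;
    proxy_set_header Host $host;
}
```

The trailing slashes on both the location and proxy_pass matter: they make Nginx strip the /dev/3005/ prefix before forwarding, so the upstream app sees clean paths.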
Traffic Flow Patterns
Public Web Traffic (gabrielpenman.com)
- HTTPS requests hit Nginx on port 443
- Nginx routes /api/* paths to the Flask backend (port 5000)
- Static content served from /home/gabriel/blog-new/output/
- /dev/* paths proxy to internal services:
  - /dev/3000 → Next.js geo-butler
  - /dev/3005 → Arbitrage webapp
  - /dev/4000, 5001, 8081, 9099 → Firebase emulators
  - /dev/8100 → BlueMap (Minecraft map)
  - /dev/5123 → Depopper research playground
Arbitrage System Communication
- All services run in Docker Swarm for orchestration
- Internal service discovery via the registry service (port 7503)
- Configuration pulled from config service (port 7504)
- Pool and registry services share MongoDB on port 27017
- Telegram bot provides external notifications
- Web dashboard accessible via the /dev/3005 proxy path
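As a rough picture of the Swarm wiring, the stack definition would look something like the fragment below; image names, the MONGO_URL variable name, and the network layout are assumptions based on the port table, not the real stack file:

```yaml
# Illustrative Swarm stack fragment — images and env var names are assumed
services:
  webapp:
    image: arbitrage/webapp      # assumed image name
    ports: ["3005:3005"]
  registry:
    image: arbitrage/registry
    ports: ["7503:7503"]         # service discovery
  config:
    image: arbitrage/config
    ports: ["7504:7504"]         # configuration management
  pool:
    image: arbitrage/pool
    ports: ["7500:7500"]
    environment:
      MONGO_URL: mongodb://mongo:27017   # assumed variable name
```

In a setup like this, services find each other by Swarm DNS name (registry, config, mongo) on the overlay network, which is why only the dashboard-facing ports strictly need publishing.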
Development Workflow
- Next.js dev server runs on port 3000, accessible via /dev/3000
- Firebase emulators provide backend services locally (all proxied via /dev/*)
- No need to deploy to Firebase for testing
- Auth, Firestore, and Functions all emulated locally and web-accessible
Gaming & Visualization
- Minecraft server listens on port 25565 for game clients
- BlueMap plugin generates live world map on port 8100
- Map accessible via the web at /dev/8100
Current Issues & Observations
- Port sprawl: 20+ ports in active use across the server
- Mixed orchestration: Systemd services alongside Docker Swarm
- Microservice exposure: Arbitrage microservices still use raw ports instead of /dev/* proxying
- Documentation lag: Hard to track what's running without port scanning
- Docker proxy duplication: Both standalone and Swarm MongoDB instances
- Dev path proliferation: 10+ different /dev/* proxy endpoints to manage
Port Allocation Summary
Web Serving: 80, 443, 3000, 5000
Firebase Emulators: 4000, 5001, 5002, 8081, 9099
Arbitrage Services: 3005, 7500, 7503-7505, 8888-8889
Databases: 3306, 8080, 27017
Gaming: 25565, 8100 (Minecraft + BlueMap)
Research: 5123 (Depopper playground)
Infrastructure: 21, 53, 2222
Docker Swarm: 2377, 7946
Proxied via Nginx: 3000, 3005, 4000, 5001, 5123, 8081, 8100, 9099
Direct Access Only: 7500, 7503-7505, 8888-8889 (microservices)
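With this many allocations, it's easy to hand a new project a port that's already taken. A tiny script can catch that before it happens; this sketch just encodes the summary above and flags any port claimed by more than one group:

```python
# Sanity-check the port allocation summary for accidental collisions.
from collections import Counter

ALLOCATIONS = {
    "web": [80, 443, 3000, 5000],
    "firebase": [4000, 5001, 5002, 8081, 9099],
    "arbitrage": [3005, 7500, 7503, 7504, 7505, 8888, 8889],
    "databases": [3306, 8080, 27017],
    "gaming": [25565, 8100],
    "research": [5123],
    "infra": [21, 53, 2222],
    "swarm": [2377, 7946],
}

def find_conflicts(allocations):
    """Return ports claimed by more than one group, sorted ascending."""
    counts = Counter(port for ports in allocations.values() for port in ports)
    return sorted(port for port, n in counts.items() if n > 1)

if __name__ == "__main__":
    conflicts = find_conflicts(ALLOCATIONS)
    print("conflicts:", conflicts or "none")
```

Running it against the current summary reports no conflicts; adding a new service's ports to the dict before deploying turns this into a cheap pre-flight check.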
Next Steps
This documentation serves as a baseline for future improvements. Potential areas to explore:
- Consolidating more services behind Nginx reverse proxy
- Standardizing on Docker Compose or Swarm for all services
- Implementing proper service discovery
- Adding monitoring and health checks
- Documenting API endpoints and inter-service communication
For now, this gives me a clear picture of what's running where, which is exactly what I needed.