How to Build a Production-Grade gRPC Service in Go: A Step-by-Step Guide
REST APIs still get the job done, but when real-time data throughput and low latency are non-negotiable — centralized logging, metrics collection, IoT telemetry — much of the industry has shifted to gRPC. Google, Netflix, Spotify, and Cloudflare all use gRPC for internal service-to-service communication. So why isn't REST enough, and how do you build your own production-grade gRPC service in Go?
In this hands-on guide, we'll use EresusLog, an open-source project by Eresus Security, as a real-world reference to build a complete gRPC logging service from zero. We'll cover everything: Protobuf definitions, database persistence with PostgreSQL, authentication interceptors, request logging, rate limiting, and health checks.
1. Why gRPC Over REST?
| Feature | REST (JSON/HTTP) | gRPC (Protobuf/HTTP/2) |
| :--- | :--- | :--- |
| Data Format | JSON (text, verbose) | Protocol Buffers (binary, compact) |
| Serialization Speed | Slow (text parsing) | Fast (binary encoding) |
| Streaming | Not native (requires WebSockets) | Native support (4 patterns) |
| Type Safety | None at transport layer | Compile-time contracts |
| HTTP Version | HTTP/1.1 | HTTP/2 (multiplexing, header compression) |
For inter-service communication where milliseconds translate directly into operational cost and system reliability, gRPC is usually the better fit.
2. Step 1: Define Your Service with Protobuf
Every gRPC project begins with a .proto file. This single file defines both your data structures (messages) and your API methods (service):
syntax = "proto3";
package logger;
option go_package = "github.com/EresusSecurity/eresuslog/api/proto;logger";
service LoggerService {
  rpc Log(LogRequest) returns (LogResponse) {}                     // Unary
  rpc StreamLogs(stream LogRequest) returns (LogResponse) {}       // Client-streaming
  rpc FetchLogs(FetchRequest) returns (FetchResponse) {}           // Unary query
  rpc SubscribeLogs(SubscribeRequest) returns (stream LogEntry) {} // Server-streaming
}
message LogRequest {
  string service_name = 1;
  string level = 2;
  string message = 3;
  int64 timestamp = 4;
  map<string, string> metadata = 5;
}
Notice: three of gRPC's four RPC patterns coexist cleanly under a single service — unary (Log, FetchLogs), client-streaming (StreamLogs), and server-streaming (SubscribeLogs) — and a bidirectional-streaming method could be added the same way if needed.
Generate Go code:
protoc --go_out=. --go_opt=paths=source_relative \
--go-grpc_out=. --go-grpc_opt=paths=source_relative \
api/proto/logger.proto
This produces logger.pb.go (data structures) and logger_grpc.pb.go (service interfaces) automatically.
3. Step 2: Database Layer (PostgreSQL + GORM)
We use GORM to manage our PostgreSQL persistence layer. The model is intentionally minimal:
// internal/db/models.go
type Log struct {
	gorm.Model
	ServiceName string `gorm:"index"`
	Level       string `gorm:"index"`
	Message     string
	Timestamp   time.Time
	Metadata    string // JSON encoded
}
The repository pattern cleanly isolates all database operations:
// internal/db/repository.go
func (r *Repository) SaveLog(ctx context.Context, serviceName, level, message string,
	timestamp int64, metadata map[string]string) error {
	metadataJSON, err := json.Marshal(metadata)
	if err != nil {
		return fmt.Errorf("encode metadata: %w", err)
	}
	log := &Log{
		ServiceName: serviceName,
		Level:       level,
		Message:     message,
		Timestamp:   time.Unix(timestamp, 0),
		Metadata:    string(metadataJSON),
	}
	return r.db.WithContext(ctx).Create(log).Error
}
4. Step 3: Implementing the gRPC Server
We build on top of UnimplementedLoggerServiceServer. Here's the core of the real-time pub/sub mechanism:
type LoggerServer struct {
	pb.UnimplementedLoggerServiceServer
	repo        *db.Repository
	subscribers []chan *pb.LogEntry
	mu          sync.RWMutex
}
func (s *LoggerServer) Log(ctx context.Context, req *pb.LogRequest) (*pb.LogResponse, error) {
	err := s.repo.SaveLog(ctx, req.ServiceName, req.Level, req.Message,
		req.Timestamp, req.Metadata)
	if err != nil {
		return &pb.LogResponse{Success: false, Message: err.Error()}, nil
	}
	// Broadcast to all active subscribers
	s.broadcast(&pb.LogEntry{
		ServiceName: req.ServiceName,
		Level:       req.Level,
		Message:     req.Message,
		Timestamp:   req.Timestamp,
		Metadata:    req.Metadata,
	})
	return &pb.LogResponse{Success: true, Message: "Log saved"}, nil
}
The SubscribeLogs method allows any client to subscribe to a filtered, real-time log feed — like tail -f, but over the network with full type safety.
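Under the hood, broadcast is plain channel fan-out guarded by the RWMutex. Here is a gRPC-free sketch of that mechanism, with LogEntry simplified to a string and the non-blocking send being the important design choice:

```go
package main

import (
	"fmt"
	"sync"
)

type hub struct {
	mu          sync.RWMutex
	subscribers []chan string
}

// subscribe registers a buffered channel that a SubscribeLogs stream drains.
func (h *hub) subscribe() chan string {
	ch := make(chan string, 16)
	h.mu.Lock()
	h.subscribers = append(h.subscribers, ch)
	h.mu.Unlock()
	return ch
}

// broadcast delivers the entry to every subscriber, dropping it for any
// subscriber whose buffer is full so one stalled stream can't block Log.
func (h *hub) broadcast(entry string) {
	h.mu.RLock()
	defer h.mu.RUnlock()
	for _, ch := range h.subscribers {
		select {
		case ch <- entry:
		default: // slow subscriber: drop rather than block the RPC
		}
	}
}

func main() {
	h := &hub{}
	sub := h.subscribe()
	h.broadcast("auth-service | ERROR | login failed")
	fmt.Println(<-sub)
}
```

A production version would also need an unsubscribe path that removes the channel when the stream's context is cancelled, so the slice doesn't grow forever.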
5. Step 4: The Interceptor Chain (Security Layers)
One of gRPC's most useful concepts is the interceptor — the equivalent of middleware in REST frameworks. In EresusLog, we chain three layers:
5.1 Request Logger Interceptor
Logs every incoming call with the client's IP, the invoked method, response status, and execution latency:
func (r *RequestLoggerInterceptor) Unary() grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{},
		info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		start := time.Now()
		resp, err := handler(ctx, req)
		clientIP := extractClientIP(ctx)
		st, _ := status.FromError(err)
		log.Printf("[gRPC] %s | %s | %s | %v",
			info.FullMethod, clientIP, st.Code(), time.Since(start))
		return resp, err
	}
}
5.2 Rate Limiter Interceptor
Per-IP sliding window rate limiting to block log spam and abuse:
rl := server.NewRateLimiter(100, 10*time.Second) // 100 req/10s per IP
5.3 Auth Interceptor (API Key)
Extracts and validates the Authorization: Bearer <key> token from gRPC metadata:
func (i *AuthInterceptor) authorize(ctx context.Context) error {
	md, ok := metadata.FromIncomingContext(ctx)
	if !ok {
		return status.Errorf(codes.Unauthenticated, "metadata is not provided")
	}
	values := md["authorization"]
	if len(values) == 0 {
		return status.Errorf(codes.Unauthenticated, "token is missing")
	}
	token := strings.TrimPrefix(values[0], "Bearer ")
	if token != i.validAPIKey {
		return status.Errorf(codes.Unauthenticated, "invalid API key")
	}
	return nil
}
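One hardening note: comparing secrets with `!=` leaks timing information. The same extraction-and-validation logic, using `crypto/subtle` for a constant-time comparison, can be exercised without a gRPC server at all — the plain map below stands in for incoming metadata, which is essentially `map[string][]string`:

```go
package main

import (
	"crypto/subtle"
	"fmt"
	"strings"
)

// authorize mimics the interceptor's check against a plain map.
func authorize(md map[string][]string, validAPIKey string) error {
	values := md["authorization"]
	if len(values) == 0 {
		return fmt.Errorf("token is missing")
	}
	token := strings.TrimPrefix(values[0], "Bearer ")
	// Constant-time compare: execution time does not depend on where
	// the first mismatching byte is.
	if subtle.ConstantTimeCompare([]byte(token), []byte(validAPIKey)) != 1 {
		return fmt.Errorf("invalid API key")
	}
	return nil
}

func main() {
	md := map[string][]string{"authorization": {"Bearer s3cret"}}
	fmt.Println(authorize(md, "s3cret"))       // <nil>
	fmt.Println(authorize(md, "other") != nil) // true
}
```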
Chain them together in main.go:
s := grpc.NewServer(
	grpc.ChainUnaryInterceptor(
		reqLogger.Unary(),
		rateLimiter.UnaryInterceptor(),
		authInterceptor.Unary(),
	),
	grpc.ChainStreamInterceptor(
		reqLogger.Stream(),
		rateLimiter.StreamInterceptor(),
		authInterceptor.Stream(),
	),
)
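Order matters: ChainUnaryInterceptor runs interceptors in the order listed, each wrapping the next, so the request logger sees every call first and auth runs last before the handler. The composition is just function wrapping, which this stdlib-only sketch makes concrete — the Handler and Interceptor types are simplified stand-ins for gRPC's signatures:

```go
package main

import "fmt"

type Handler func(req string) string
type Interceptor func(req string, next Handler) string

// chain composes interceptors so the first in the slice is outermost,
// matching grpc.ChainUnaryInterceptor's ordering.
func chain(interceptors []Interceptor, final Handler) Handler {
	h := final
	for i := len(interceptors) - 1; i >= 0; i-- {
		ic, next := interceptors[i], h
		h = func(req string) string { return ic(req, next) }
	}
	return h
}

// tag produces an interceptor that records its position in the chain.
func tag(name string) Interceptor {
	return func(req string, next Handler) string {
		return name + "(" + next(req) + ")"
	}
}

func main() {
	h := chain(
		[]Interceptor{tag("log"), tag("ratelimit"), tag("auth")},
		func(req string) string { return req },
	)
	fmt.Println(h("Log")) // log(ratelimit(auth(Log)))
}
```

This ordering also means a rate-limited or unauthenticated request still gets a log line, since the logger wraps the rejection.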
6. Step 5: Health Checks for Production
Any gRPC service running behind Kubernetes or an AWS ALB must expose a health check endpoint:
healthServer := health.NewServer()
healthpb.RegisterHealthServer(s, healthServer)
healthServer.SetServingStatus("logger.LoggerService",
healthpb.HealthCheckResponse_SERVING)
Verify with:
grpcurl -plaintext localhost:50051 grpc.health.v1.Health/Check
7. Get the Full Source Code
Every line of code in this guide is available as a fully working, open-source project. Clone it, run it, and build on it:
git clone https://github.com/EresusSecurity/eresuslog.git
cd eresuslog
cp .env.example .env
go run cmd/server/main.go
Star the project on GitHub (⭐) to support development, and feel free to contribute!
If your team needs expert DevSecOps consulting, AI-powered penetration testing, or autonomous security agents built for your infrastructure, reach out to the engineering team at Eresus Security.