Go

This guide walks through testing a Go application with Stove --- end to end, including HTTP, PostgreSQL, Kafka, distributed tracing, and the dashboard. The Go app is a simple product CRUD service; Stove starts it as an OS process, passes infrastructure configs as environment variables, and runs Kotlin e2e tests against it.

The full source is at recipes/process/golang/go-showcase.

Project Structure

go-showcase/                   # Standalone Gradle project (copy-paste ready)
  main.go                      # Entry point, env var config, graceful shutdown
  db.go                        # PostgreSQL queries (auto-traced via otelsql)
  handlers.go                  # HTTP handlers + Kafka publish (auto-traced via otelhttp)
  kafka.go                     # KafkaProducer interface, factory, shared consumer handler
  kafka_sarama.go              # IBM/sarama implementation
  kafka_franz.go               # twmb/franz-go implementation
  kafka_segmentio.go           # segmentio/kafka-go implementation
  tracing.go                   # OpenTelemetry SDK initialization
  go.mod
  stovetests/                  # Kotlin Stove tests
    kotlin/com/.../e2e/
      setup/
        StoveConfig.kt              # Stove system configuration (uses stove-process goApp())
        ProductMigration.kt         # Creates products table
      tests/
        GoShowcaseTest.kt           # E2E tests
    resources/
      kotest.properties
  build.gradle.kts             # Builds Go + runs Kotlin tests
  settings.gradle.kts          # Standalone settings with version catalog

# Distributed as a Go library:
go/stove-kafka/            # Stove Kafka bridge for Go applications
  bridge.go                    # Core bridge (library-agnostic gRPC client)
  sarama/                      # IBM/sarama interceptors
    interceptors.go
  franz/                       # twmb/franz-go hooks
    hooks.go
  segmentio/                     # segmentio/kafka-go helpers
    bridge.go
  stoveobserver/               # Generated gRPC code from messages.proto
  go.mod

The Go Application

A minimal HTTP + PostgreSQL + Kafka service. The key design choice: all tracing lives in the infrastructure layer, not in business logic.

Entry Point

main.go
func main() {
    ctx := context.Background()
    port := getEnv("APP_PORT", "8080")

    // Initialize OTel tracing (no-ops gracefully if endpoint not set)
    shutdownTracing, _ := initTracing(ctx, "go-showcase")
    defer shutdownTracing(ctx)

    // connStr is assembled from the DB_* environment variables (helper elided)
    db, _ := initDB(connStr)  // otelsql wraps database/sql automatically
    defer db.Close()

    // Initialize Stove Kafka bridge (nil in production — zero overhead)
    bridge, _ := stovekafka.NewBridgeFromEnv()
    defer bridge.Close()

    // Initialize Kafka producer and consumer (library chosen by KAFKA_LIBRARY env var)
    kafkaLibrary := getEnv("KAFKA_LIBRARY", "sarama")
    brokers := getEnv("KAFKA_BROKERS", "")  // comma-separated broker list
    producer, stopKafka, _ := initKafka(kafkaLibrary, brokers, db, bridge)
    defer stopKafka()

    mux := http.NewServeMux()
    registerRoutes(mux, db, producer)

    // otelhttp middleware creates spans for every HTTP request
    handler := otelhttp.NewHandler(mux, "http.request")

    server := &http.Server{Addr: ":" + port, Handler: handler}
    // ... graceful shutdown on SIGTERM
}

Configuration comes entirely from environment variables:

| Variable | Purpose | Default |
|---|---|---|
| APP_PORT | HTTP listen port | 8080 |
| DB_HOST, DB_PORT, DB_NAME, DB_USER, DB_PASS | PostgreSQL connection | localhost, 5432, stove, sa, sa |
| OTEL_EXPORTER_OTLP_ENDPOINT | OTLP gRPC endpoint for traces | (disabled if empty) |
| KAFKA_BROKERS | Comma-separated Kafka broker addresses | (disabled if empty) |
| KAFKA_LIBRARY | Kafka client library to use: sarama, franz, or segmentio | sarama |
| STOVE_KAFKA_BRIDGE_PORT | Stove Kafka bridge gRPC port | (disabled if empty, test-only) |
| GOCOVERDIR | Directory for Go integration test coverage data | (disabled if empty, test-only) |

Handlers

Handlers are pure business logic --- no tracing imports:

handlers.go
func handleCreateProduct(db *sql.DB, producer KafkaProducer) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        var req createProductRequest
        if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
            http.Error(w, `{"error":"invalid request body"}`, http.StatusBadRequest)
            return
        }

        product := Product{ID: uuid.New().String(), Name: req.Name, Price: req.Price}

        if err := insertProduct(r.Context(), db, product); err != nil {
            http.Error(w, `{"error":"failed to create product"}`, http.StatusInternalServerError)
            return
        }

        // Publish ProductCreatedEvent to Kafka (works with any library)
        if producer != nil {
            event := ProductCreatedEvent{ID: product.ID, Name: product.Name, Price: product.Price}
            eventBytes, _ := json.Marshal(event)
            producer.SendMessage(topicProductCreated, product.ID, eventBytes)
        }

        writeJSON(w, http.StatusCreated, product)
    }
}

The KafkaProducer interface abstracts away the Kafka client library:

kafka.go
type KafkaProducer interface {
    SendMessage(topic, key string, value []byte) error
    Close() error
}

func initKafka(library, brokers string, db *sql.DB, bridge *stovekafka.Bridge) (KafkaProducer, func(), error) {
    groupID := "go-showcase-" + library
    switch library {
    case "franz":
        return initFranzKafka(brokers, groupID, db, bridge)
    case "segmentio":
        return initSegmentioKafka(brokers, groupID, db, bridge)
    default:
        return initSaramaKafka(brokers, groupID, db, bridge)
    }
}
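Because the contract is only two methods, supporting a fourth library (or a test fake) means satisfying the interface; nothing in the handlers changes. A hypothetical in-memory implementation:

```go
package main

import "fmt"

// KafkaProducer, as defined in kafka.go.
type KafkaProducer interface {
	SendMessage(topic, key string, value []byte) error
	Close() error
}

// memoryProducer is an illustrative fake: it records messages instead of
// sending them, yet is a drop-in replacement anywhere a KafkaProducer is used.
type memoryProducer struct{ sent []string }

func (m *memoryProducer) SendMessage(topic, key string, value []byte) error {
	m.sent = append(m.sent, fmt.Sprintf("%s/%s=%s", topic, key, value))
	return nil
}

func (m *memoryProducer) Close() error { return nil }

func main() {
	var p KafkaProducer = &memoryProducer{}
	_ = p.SendMessage("product.created", "id-1", []byte(`{"name":"x"}`))
	fmt.Println(p.(*memoryProducer).sent) // → [product.created/id-1={"name":"x"}]
}
```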

Notice: r.Context() is passed to the DB function. This is standard Go practice, and it's all that's needed for trace propagation --- the otelhttp middleware puts a span in the context, and otelsql creates child spans from it.

Database

Database functions are equally clean --- no tracing boilerplate:

db.go
func initDB(connStr string) (*sql.DB, error) {
    // otelsql wraps database/sql --- all queries are automatically traced
    db, err := otelsql.Open("postgres", connStr,
        otelsql.WithAttributes(semconv.DBSystemPostgreSQL),
    )
    // ...
}

func insertProduct(ctx context.Context, db *sql.DB, p Product) error {
    _, err := db.ExecContext(ctx,
        "INSERT INTO products (id, name, price) VALUES ($1, $2, $3)",
        p.ID, p.Name, p.Price,
    )
    return err
}

Tracing Setup

The OTel SDK initialization is the only place with tracing imports:

tracing.go
func initTracing(ctx context.Context, serviceName string) (func(context.Context), error) {
    endpoint := os.Getenv("OTEL_EXPORTER_OTLP_ENDPOINT")
    if endpoint == "" {
        return func(context.Context) {}, nil  // Graceful no-op
    }

    conn, _ := grpc.NewClient(endpoint, grpc.WithTransportCredentials(insecure.NewCredentials()))
    exporter, _ := otlptracegrpc.New(ctx, otlptracegrpc.WithGRPCConn(conn))

    tp := sdktrace.NewTracerProvider(
        sdktrace.WithSyncer(exporter),   // Sync export for tests (no batching delay)
        sdktrace.WithResource(resource.NewWithAttributes(
            semconv.SchemaURL,
            semconv.ServiceNameKey.String(serviceName),
        )),
    )

    otel.SetTracerProvider(tp)
    otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator(
        propagation.TraceContext{},  // W3C traceparent
        propagation.Baggage{},
    ))

    return func(ctx context.Context) { tp.Shutdown(ctx) }, nil
}

Sync vs Batch Exporter

Use WithSyncer(exporter) for tests so spans are exported immediately when they end. In production, use WithBatcher(exporter) for better performance. The 5-second default batch interval would cause test assertions to fail because spans wouldn't arrive in time.
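For contrast, the production variant would swap the syncer for a batcher. A sketch only: exporter is the OTLP exporter built in initTracing, and both options come from the go.opentelemetry.io/otel/sdk/trace package:

```go
// tracing.go — production variant (sketch): batched export instead of sync
tp := sdktrace.NewTracerProvider(
    sdktrace.WithBatcher(exporter,
        // 5s is already the default; shown only to make the trade-off explicit
        sdktrace.WithBatchTimeout(5*time.Second),
    ),
)
```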

W3C Trace Context Propagation

Setting propagation.TraceContext{} is essential. Stove's HTTP client sends a traceparent header with each request. The otelhttp middleware extracts it, so all spans in the Go app share the same trace ID as the test. This is what makes tracing { shouldContainSpan(...) } assertions work.
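To make the wire format concrete, traceparent follows the W3C layout version-traceid-spanid-flags, e.g. 00-&lt;32 hex chars&gt;-&lt;16 hex chars&gt;-01. A toy parser, purely illustrative since otelhttp and the registered propagator handle this for you:

```go
package main

import (
	"fmt"
	"strings"
)

// traceIDOf extracts the 32-hex-character trace ID from a W3C traceparent
// header value, rejecting values that don't match the four-field layout.
func traceIDOf(traceparent string) (string, bool) {
	parts := strings.Split(traceparent, "-")
	if len(parts) != 4 || len(parts[1]) != 32 || len(parts[2]) != 16 {
		return "", false
	}
	return parts[1], true
}

func main() {
	id, ok := traceIDOf("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
	fmt.Println(ok, id) // → true 4bf92f3577b34da6a3ce929d0e0e4736
}
```

Every span the Go app emits for this request carries that same trace ID, which is how Stove's assertions find them.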

Kafka

Stove provides a Go bridge library (stove-kafka) that enables shouldBeConsumed and shouldBePublished assertions for Go applications. The bridge forwards produced/consumed messages via gRPC to Stove's StoveKafkaObserverGrpcServer. The core is library-agnostic; client-specific subpackages provide interceptors/hooks for popular Go Kafka libraries:

| Library | Subpackage | Integration |
|---|---|---|
| IBM/sarama | sarama | ProducerInterceptor / ConsumerInterceptor |
| twmb/franz-go | franz | kgo.WithHooks(&franz.Hook{...}) |
| segmentio/kafka-go | segmentio | segmentio.ReportWritten() / segmentio.ReportRead() |

Using other Kafka libraries (e.g. confluent-kafka-go)

The subpackages above are conveniences. The core bridge (PublishedMessage, ConsumedMessage, Bridge) has no Kafka client dependency. For any library not listed above, import only the core package and call bridge.ReportPublished(), bridge.ReportConsumed(), and bridge.ReportCommitted() directly with your own type conversion:

import stovekafka "github.com/trendyol/stove/go/stove-kafka"

bridge, _ := stovekafka.NewBridgeFromEnv()

// After producing
_ = bridge.ReportPublished(ctx, &stovekafka.PublishedMessage{
    Topic: msg.Topic, Key: string(msg.Key), Value: msg.Value,
    Headers: myHeaders(msg),
})

// After consuming
_ = bridge.ReportConsumed(ctx, &stovekafka.ConsumedMessage{
    Topic: msg.Topic, Key: string(msg.Key), Value: msg.Value,
    Partition: msg.Partition, Offset: msg.Offset,
    Headers: myHeaders(msg),
})
_ = bridge.ReportCommitted(ctx, msg.Topic, msg.Partition, msg.Offset+1)

How It Works

sequenceDiagram
    participant App as Go App (Sarama)
    participant PI as ProducerInterceptor
    participant CI as ConsumerInterceptor
    participant Bridge as Stove Bridge (gRPC)
    participant Observer as StoveKafkaObserverGrpcServer

    App->>PI: SendMessage()
    PI->>Bridge: ReportPublished(msg)
    Bridge->>Observer: onPublishedMessage (gRPC)
    Note over Observer: shouldBePublished ✓

    App->>CI: ConsumeMessage()
    CI->>Bridge: ReportConsumed(msg)
    CI->>Bridge: ReportCommitted(topic, partition, offset+1)
    Bridge->>Observer: onConsumedMessage + onCommittedMessage (gRPC)
    Note over Observer: shouldBeConsumed ✓

In production, STOVE_KAFKA_BRIDGE_PORT is not set, so NewBridgeFromEnv() returns nil. All Bridge methods are nil-safe no-ops --- zero overhead.
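The zero-overhead claim rests on Go's nil-receiver semantics: methods may be called on a nil pointer as long as they check for nil themselves. A sketch of the pattern (not the bridge's actual source):

```go
package main

import "fmt"

type Bridge struct{ grpcAddr string }

// ReportPublished is nil-safe: on a nil *Bridge it returns immediately, so
// production code can call it unconditionally.
func (b *Bridge) ReportPublished(topic string) error {
	if b == nil {
		return nil // production: no bridge configured, do nothing
	}
	// test mode: forward the message to Stove's gRPC observer (elided)
	return nil
}

func main() {
	var b *Bridge // what NewBridgeFromEnv() yields without STOVE_KAFKA_BRIDGE_PORT
	fmt.Println(b.ReportPublished("product.created") == nil) // → true, no panic
}
```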

Integrating the Bridge (Step by Step)

Follow these steps to add Stove Kafka support to your Go application:

Step 1: Add the dependency

go get github.com/trendyol/stove/go/stove-kafka

Step 2: Initialize the bridge in your app

import stovekafka "github.com/trendyol/stove/go/stove-kafka"

// Returns nil when STOVE_KAFKA_BRIDGE_PORT is not set (production mode)
bridge, err := stovekafka.NewBridgeFromEnv()
if err != nil {
    log.Fatalf("failed to init stove bridge: %v", err)
}
defer bridge.Close()

Step 3: Wire the bridge into your Kafka client

Choose the snippet matching your Go Kafka library:

kafka.go (IBM/sarama)
import stovesarama "github.com/trendyol/stove/go/stove-kafka/sarama"

config := sarama.NewConfig()
config.Producer.Interceptors = []sarama.ProducerInterceptor{
    &stovesarama.ProducerInterceptor{Bridge: bridge},
}
config.Consumer.Interceptors = []sarama.ConsumerInterceptor{
    &stovesarama.ConsumerInterceptor{Bridge: bridge},
}
kafka.go (twmb/franz-go)
import "github.com/trendyol/stove/go/stove-kafka/franz"

client, err := kgo.NewClient(
    kgo.SeedBrokers("localhost:9092"),
    kgo.WithHooks(&franz.Hook{Bridge: bridge}),
)
kafka.go (segmentio/kafka-go)
import "github.com/trendyol/stove/go/stove-kafka/segmentio"

// After producing
err := writer.WriteMessages(ctx, msgs...)
segmentio.ReportWritten(ctx, bridge, msgs...)

// After consuming
msg, err := reader.ReadMessage(ctx)
segmentio.ReportRead(ctx, bridge, msg)

When Bridge is nil (production), all interceptors/helpers return immediately with zero overhead.

Step 4: Add stoveKafka to your Gradle test dependencies

build.gradle.kts
dependencies {
    testImplementation(stoveLibs.stoveKafka)
}

Step 5: Register the Kafka system in your Stove config

StoveConfig.kt
kafka {
    KafkaSystemOptions(
        configureExposedConfiguration = { cfg ->
            listOf("kafka.bootstrapServers=${cfg.bootstrapServers}")
        }
    )
}

Pass the bridge port and broker address to your Go app via configMapper:

map["kafka.bootstrapServers"]?.let { put("KAFKA_BROKERS", it) }
put("STOVE_KAFKA_BRIDGE_PORT", stoveKafkaBridgePortDefault)

Step 6: Write Kafka assertions in your tests

kafka {
    shouldBePublished<MyEvent>(10.seconds) { actual.id == expectedId }
    shouldBeConsumed<MyEvent>(10.seconds) { actual.id == expectedId }
    publish("my.topic", MyEvent(id = "123", name = "test"))
}

Commit Reporting

Sarama lacks an onCommit() interceptor, so ConsumerInterceptor.OnConsume() pre-reports the committed offset (offset + 1) along with the consumed message. This satisfies Stove's shouldBeConsumed assertion, which checks isCommitted(topic, offset, partition) and requires committed.offset >= consumed.offset + 1.
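The arithmetic in isolation: a committed offset is "the next offset to read", so a message at offset N counts as committed once the committed offset reaches N + 1. A tiny sketch of that comparison:

```go
package main

import "fmt"

// isCommitted reports whether the committed offset has moved past the
// consumed message, i.e. the message is durably acknowledged.
func isCommitted(consumedOffset, committedOffset int64) bool {
	return committedOffset >= consumedOffset+1
}

func main() {
	fmt.Println(isCommitted(41, 42)) // message at 41, commit at 42 → true
	fmt.Println(isCommitted(42, 42)) // message at 42 not yet committed → false
}
```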

Test-Friendly Kafka Settings

When running against Testcontainers (e.g. in Stove e2e tests), configure your Kafka clients for fast feedback:

  • Auto-create topics — the test container may not have topics pre-created
  • Small batch size / low batch timeout — flush produces immediately
  • Short auto-commit interval — make consumed offsets visible to Stove quickly
// IBM/sarama
config := sarama.NewConfig()
config.Producer.Return.Successes = true
config.Consumer.Offsets.Initial = sarama.OffsetOldest
config.Consumer.Offsets.AutoCommit.Interval = 100 * time.Millisecond

// twmb/franz-go (options passed to kgo.NewClient)
kgo.AllowAutoTopicCreation(),
kgo.AutoCommitInterval(100 * time.Millisecond),
kgo.ConsumeResetOffset(kgo.NewOffset().AtStart()),

// segmentio/kafka-go writer
writer := &kafka.Writer{
    BatchSize:              1,
    BatchTimeout:           10 * time.Millisecond,
    AllowAutoTopicCreation: true,
}

// Reader
reader := kafka.NewReader(kafka.ReaderConfig{
    CommitInterval: 100 * time.Millisecond,
    MaxWait:        500 * time.Millisecond,
})

Production vs Test settings

These aggressive settings are optimized for test speed, not throughput. In production, use larger batch sizes, longer commit intervals, and broker-managed topic creation.

Consumer Groups

Each Kafka library run uses a unique consumer group ID ("go-showcase-" + library) to prevent offset carryover between sequential test runs. This ensures each library starts consuming from the beginning of the topic.

Go Dependencies

go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp  # HTTP middleware
go.opentelemetry.io/otel                                        # OTel API
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc # OTLP gRPC exporter
go.opentelemetry.io/otel/sdk                                    # OTel SDK
github.com/XSAM/otelsql                                         # database/sql auto-instrumentation
github.com/lib/pq                                                # PostgreSQL driver
google.golang.org/grpc                                           # gRPC (for OTLP + bridge)

# Kafka — pick one client + its bridge subpackage:
github.com/IBM/sarama                                            # + stove-kafka/sarama
github.com/twmb/franz-go/pkg/kgo                                 # + stove-kafka/franz
github.com/segmentio/kafka-go                                    # + stove-kafka/segmentio
github.com/trendyol/stove/go/stove-kafka                        # Core bridge (always needed)

Stove Test Setup

Gradle Build

Gradle compiles the Go binary and passes its path to the test:

build.gradle.kts
val goBinary = layout.buildDirectory.file("go-app").get().asFile
val goExecutable = providers.environmentVariable("GO_EXECUTABLE").getOrElse("go")
val coverageEnabled = providers.gradleProperty("go.coverage")
    .map { it.toBoolean() }.getOrElse(false)

tasks.register<Exec>("buildGoApp") {
    description = "Compiles the Go application."
    group = "build"
    val args = mutableListOf(goExecutable, "build")
    if (coverageEnabled) args.add("-cover")
    args.addAll(listOf("-o", goBinary.absolutePath, "."))
    commandLine(args)
    inputs.files(fileTree(".") { include("*.go", "go.mod", "go.sum") })
    outputs.file(goBinary)
}

// Per-library e2e test tasks — each passes KAFKA_LIBRARY to the Go app
val kafkaLibraries = listOf("sarama", "franz", "segmentio")
val kafkaE2eTasks = kafkaLibraries.mapIndexed { index, lib ->
    tasks.register<Test>("e2eTest_$lib") {
        dependsOn("buildGoApp")
        systemProperty("go.app.binary", goBinary.absolutePath)
        systemProperty("kafka.library", lib)
        // Run sequentially: sarama → franz → segmentio
        if (index > 0) mustRunAfter("e2eTest_${kafkaLibraries[index - 1]}")
    }
}
// `e2eTest` runs all three
tasks.named<Test>("e2eTest") { dependsOn(kafkaE2eTasks); enabled = false }

dependencies {
    testImplementation(stoveLibs.stove)
    testImplementation(stoveLibs.stoveProcess)
    testImplementation(stoveLibs.stovePostgres)
    testImplementation(stoveLibs.stoveHttp)
    testImplementation(stoveLibs.stoveTracing)
    testImplementation(stoveLibs.stoveDashboard)
    testImplementation(stoveLibs.stoveKafka)
    testImplementation(stoveLibs.stoveExtensionsKotest)
}

Running ./gradlew e2eTest compiles the Go binary first, then runs the Kotlin tests.

stove-process Module

The stove-process module provides goApp() and processApp() out of the box --- no custom ApplicationUnderTest implementation needed:

dependencies {
    testImplementation("com.trendyol:stove-process")
}

The module handles:

  • Starting the binary as an OS process via ProcessBuilder
  • Mapping Stove configs to environment variables via envMapper {} or CLI arguments via argsMapper {}
  • Readiness checking (HTTP health, TCP port, custom probe, or fixed delay)
  • Graceful shutdown (SIGTERM → force-kill after timeout)
  • Stdout/stderr reading in a background thread

goApp() defaults the binary path from the go.app.binary system property. For other languages, use processApp() directly.

Configuration passing

Two mechanisms are available --- use one or both:

  • envMapper {} --- maps Stove configs to environment variables (Go convention)
  • argsMapper(prefix, separator) {} --- maps Stove configs to CLI arguments appended to the command
// Environment variables (default for Go)
envMapper {
    "database.host" to "DB_HOST"
    env("LOG_LEVEL", "debug")
}

// CLI arguments (for apps using flag-based config)
argsMapper(prefix = "--", separator = "=") {
    "database.host" to "db-host"       // --db-host=localhost
    arg("verbose")                     // --verbose
}

// Space separator produces two args per flag
argsMapper(prefix = "--", separator = " ") {
    "database.host" to "db-host"       // --db-host localhost
}

Both envProvider and argsProvider can be set on ProcessApplicationOptions simultaneously.

Stove Configuration

StoveConfig.kt
Stove()
    .with {
        httpClient {
            HttpClientSystemOptions(baseUrl = "http://localhost:$APP_PORT")
        }

        dashboard {
            DashboardSystemOptions(appName = "go-showcase")
        }

        tracing {
            enableSpanReceiver(port = OTLP_PORT)
        }

        kafka {
            KafkaSystemOptions(
                configureExposedConfiguration = { cfg ->
                    listOf("kafka.bootstrapServers=${cfg.bootstrapServers}")
                }
            )
        }

        postgresql {
            PostgresqlOptions(
                databaseName = "stove",
                configureExposedConfiguration = { cfg ->
                    listOf(
                        "database.host=${cfg.host}",
                        "database.port=${cfg.port}",
                        "database.name=stove",
                        "database.username=${cfg.username}",
                        "database.password=${cfg.password}"
                    )
                }
            ).migrations {
                register<ProductMigration>()
            }
        }

        goApp(
            target = ProcessTarget.Server(port = APP_PORT, portEnvVar = "APP_PORT"),
            envProvider = envMapper {
                "database.host" to "DB_HOST"
                "database.port" to "DB_PORT"
                "database.name" to "DB_NAME"
                "database.username" to "DB_USER"
                "database.password" to "DB_PASS"
                "kafka.bootstrapServers" to "KAFKA_BROKERS"
                env("OTEL_EXPORTER_OTLP_ENDPOINT", "localhost:$OTLP_PORT")
                env("KAFKA_LIBRARY") { System.getProperty("kafka.library") ?: "sarama" }
                env("STOVE_KAFKA_BRIDGE_PORT", stoveKafkaBridgePortDefault)
                env("GOCOVERDIR") {
                    System.getProperty("go.cover.dir")
                        ?.also { java.io.File(it).mkdirs() } ?: ""
                }
            }
        )
    }.run()

The envMapper block declaratively maps Stove's exposed configurations (from configureExposedConfiguration in each system) to environment variables the Go app expects. Use "stoveKey" to "ENV_VAR" for config-derived values and env("NAME", "value") for static ones. For apps that prefer CLI arguments, use argsMapper instead (or alongside). The stoveKafkaBridgePortDefault is a dynamically assigned port that the Stove Kafka system starts a gRPC server on --- the Go bridge connects to it.

Database Migration

Stove creates the table before the Go app starts:

ProductMigration.kt
class ProductMigration : DatabaseMigration<PostgresSqlMigrationContext> {
    override val order: Int = 1

    override suspend fun execute(connection: PostgresSqlMigrationContext) {
        connection.sql.execute(
            queryOf("""
                CREATE TABLE IF NOT EXISTS products (
                    id VARCHAR(255) PRIMARY KEY,
                    name VARCHAR(255) NOT NULL,
                    price DECIMAL(10, 2) NOT NULL
                )
            """).asExecute
        )
    }
}

Writing Tests

Tests use the standard Stove DSL --- identical to how you'd test a Spring Boot app:

GoShowcaseTest.kt
class GoShowcaseTest : FunSpec({

    test("should create a product and verify via HTTP, database, and traces") {
        stove {
            val productName = "Stove Go Showcase Product"
            val productPrice = 42.99
            var productId: String? = null

            // 1. Create via REST API
            http {
                postAndExpectBody<ProductResponse>(
                    uri = "/api/products",
                    body = CreateProductRequest(name = productName, price = productPrice).some()
                ) { actual ->
                    actual.status shouldBe 201
                    productId = actual.body().id
                }
            }

            // 2. Verify database state
            postgresql {
                shouldQuery<ProductRow>(
                    query = "SELECT id, name, price FROM products WHERE id = '$productId'",
                    mapper = productRowMapper
                ) { rows ->
                    rows.size shouldBe 1
                    rows.first().name shouldBe productName
                }
            }

            // 3. Read back via HTTP
            http {
                getResponse<ProductResponse>(uri = "/api/products/$productId") { actual ->
                    actual.status shouldBe 200
                    actual.body().name shouldBe productName
                }
            }

            // 4. Verify distributed traces from the Go app
            tracing {
                waitForSpans(4, 5000)
                shouldContainSpan("http.request")
                shouldNotHaveFailedSpans()
                spanCountShouldBeAtLeast(4)
                executionTimeShouldBeLessThan(10.seconds)
            }
        }
    }
})

Kafka Assertions

Verify that the Go app publishes events when products are created:

test("should publish ProductCreatedEvent when product is created") {
    stove {
        http {
            postAndExpectBody<ProductResponse>(
                uri = "/api/products",
                body = CreateProductRequest(name = "Kafka Product", price = 29.99).some()
            ) { actual ->
                actual.status shouldBe 201
            }
        }

        kafka {
            shouldBePublished<ProductCreatedEvent>(10.seconds) {
                actual.name == "Kafka Product" && actual.price == 29.99
            }
        }
    }
}

Verify that the Go app consumes events and updates state:

test("should consume product update events from Kafka") {
    stove {
        var productId: String? = null

        // Create a product via HTTP
        http {
            postAndExpectBody<ProductResponse>(
                uri = "/api/products",
                body = CreateProductRequest(name = "Original Name", price = 10.0).some()
            ) { actual ->
                productId = actual.body().id
            }
        }

        // Publish an update event — Go consumer picks it up and updates DB
        kafka {
            publish("product.update", ProductUpdateEvent(id = productId!!, name = "Updated Name", price = 99.99))
            shouldBeConsumed<ProductUpdateEvent>(10.seconds) {
                actual.id == productId && actual.name == "Updated Name"
            }
        }

        // Verify the database was updated
        postgresql {
            shouldQuery<ProductRow>(
                query = "SELECT id, name, price FROM products WHERE id = '$productId'",
                mapper = productRowMapper
            ) { rows ->
                rows.first().name shouldBe "Updated Name"
                rows.first().price shouldBe 99.99
            }
        }
    }
}

How Tracing Flows

Understanding how traces propagate between Stove and the Go app:

1. StoveKotestExtension starts a TraceContext before each test
2. Stove HTTP client injects `traceparent` header into requests
3. otelhttp middleware extracts traceparent, creates HTTP span as child
4. Handler passes r.Context() to DB functions
5. otelsql creates DB spans as children of the HTTP span
6. All spans share the same trace ID as the test
7. Spans are exported via OTLP gRPC to Stove's receiver
8. tracing { shouldContainSpan(...) } queries spans by trace ID
sequenceDiagram
    participant Test as Stove Test
    participant HTTP as Stove HTTP Client
    participant MW as otelhttp Middleware
    participant H as Go Handler
    participant DB as otelsql Driver
    participant Collector as Stove OTLP Receiver

    Test->>HTTP: postAndExpectBody(traceparent: 00-abc...)
    HTTP->>MW: POST /api/products + traceparent header
    MW->>MW: Extract trace context, create HTTP span (traceId=abc)
    MW->>H: r.Context() carries span
    H->>DB: ExecContext(ctx, INSERT...)
    DB->>DB: Create child span (traceId=abc)
    DB->>Collector: Export DB span
    MW->>Collector: Export HTTP span
    Test->>Collector: tracing { shouldContainSpan("http.request") }

Running

# From the go-showcase directory — runs all three Kafka libraries (sarama, franz, segmentio)
cd recipes/process/golang/go-showcase
./gradlew e2eTest

# Run a specific library only
./gradlew e2eTest_sarama
./gradlew e2eTest_franz
./gradlew e2eTest_segmentio

# With Go code coverage (see Code Coverage section below)
./gradlew e2eTestWithCoverage -Pgo.coverage=true

Each per-library task:

  1. Compiles the Go binary (go build)
  2. Starts PostgreSQL and Kafka containers (Testcontainers)
  3. Runs database migrations
  4. Starts the OTLP span receiver and Kafka bridge gRPC server
  5. Launches the Go binary with KAFKA_LIBRARY=<lib> and infrastructure env vars
  6. Runs the 5 Kotlin e2e tests
  7. Stops everything and cleans up

The e2eTest task runs all three sequentially (sarama → franz → segmentio), verifying the Stove Kafka bridge works across all supported Go Kafka libraries.

Code Coverage

Since Stove runs the Go binary as an OS process (not via go test), standard go test -cover doesn't apply. Go 1.20+ introduced integration test coverage: build with go build -cover, set GOCOVERDIR, and coverage data is written on graceful shutdown. This fits perfectly with Stove's lifecycle (SIGTERM → graceful shutdown → coverage files).

How It Works

1. go build -cover          → instruments the binary
2. GOCOVERDIR=/path         → tells the binary where to write coverage data
3. SIGTERM (Stove stop)     → graceful shutdown triggers coverage flush
4. go tool covdata textfmt  → converts raw data to standard coverage.out
5. go tool cover -func/-html → human-readable reports

Gradle Setup

The go-showcase recipe supports coverage via the -Pgo.coverage=true Gradle property. When disabled (default), there is zero overhead.

build.gradle.kts
val goExecutable = providers.environmentVariable("GO_EXECUTABLE").getOrElse("go")
val coverageEnabled = providers.gradleProperty("go.coverage")
    .map { it.toBoolean() }.getOrElse(false)
val goCoverDirPath = layout.buildDirectory.dir("go-coverage").get().asFile.absolutePath
val goCoverOutPath = layout.buildDirectory.dir("go-coverage").get().asFile
    .resolve("coverage.out").absolutePath

// Build with -cover when enabled
tasks.register<Exec>("buildGoApp") {
    val args = mutableListOf(goExecutable, "build")
    if (coverageEnabled) args.add("-cover")
    args.addAll(listOf("-o", goBinary.absolutePath, "."))
    commandLine(args)
}

// Pass GOCOVERDIR to the test JVM, disable build cache for coverage runs
tasks.register<Test>("e2eTest_sarama") {
    // ...
    if (coverageEnabled) {
        systemProperty("go.cover.dir", goCoverDirPath)
        outputs.cacheIf { false }  // Coverage data is a side effect
    }
}

Coverage report tasks (only registered when coverage is enabled):

build.gradle.kts
if (coverageEnabled) {
    tasks.register<Exec>("goCoverageReport") {
        mustRunAfter(kafkaE2eTasks)
        commandLine(goExecutable, "tool", "covdata", "textfmt",
            "-i=$goCoverDirPath", "-o=$goCoverOutPath")
    }
    tasks.register<Exec>("goCoverageSummary") {
        dependsOn("goCoverageReport")
        commandLine(goExecutable, "tool", "cover", "-func=$goCoverOutPath")
    }
    tasks.register<Exec>("goCoverageHtml") {
        dependsOn("goCoverageReport")
        commandLine(goExecutable, "tool", "cover", "-html=$goCoverOutPath", "-o=coverage.html")
    }
    tasks.register("e2eTestWithCoverage") {
        dependsOn(kafkaE2eTasks)
        finalizedBy("goCoverageSummary", "goCoverageHtml")
    }
}

Stove Configuration

Pass GOCOVERDIR to the Go process via envMapper. The directory is created lazily; when coverage is disabled the value is empty and Go ignores it:

StoveConfig.kt
goApp(
    target = ProcessTarget.Server(port = APP_PORT, portEnvVar = "APP_PORT"),
    envProvider = envMapper {
        // ... other mappings ...
        env("GOCOVERDIR") {
            System.getProperty("go.cover.dir")
                ?.also { java.io.File(it).mkdirs() } ?: ""
        }
    }
)

SIGPIPE Handling

When a Go process runs under Java's ProcessBuilder, the stdout pipe can close before the process exits. If Go writes to the closed pipe (e.g. log.Println during shutdown), it receives SIGPIPE and terminates immediately --- before the coverage counters are flushed. Add this at the top of main():

main.go
func main() {
    // Ignore SIGPIPE so log writes to a closed stdout pipe don't kill the process.
    // This ensures clean shutdown (and coverage flush) when run under a process manager.
    signal.Ignore(syscall.SIGPIPE)
    // ...
}

This is good practice for any long-running Go service run under an external process manager, not just for coverage.

Running

# Without coverage (default — zero overhead)
./gradlew e2eTest_sarama

# With coverage — runs tests + generates reports
./gradlew e2eTestWithCoverage -Pgo.coverage=true

# Output includes per-function coverage:
# github.com/.../handlers.go:15:   HandleCreate    85.7%
# github.com/.../db.go:23:         QueryProducts   100.0%
# total:                           (statements)    81.9%

The HTML report is written to build/go-coverage/coverage.html.

Why no Stove framework changes needed

Everything is achievable with existing primitives: the -cover build flag is a Gradle task concern, GOCOVERDIR is just another env var via envMapper, coverage processing happens after tests via Gradle tasks, and graceful shutdown (SIGTERM) is already handled by ProcessApplicationUnderTest.stop().

Adapting for Other Languages

The same pattern works for any language. Replace the Go-specific parts:

| Part | Go | Python | Node.js | Rust |
|---|---|---|---|---|
| Build step | go build | (none or pip install) | npm install && npm run build | cargo build |
| Binary | Single executable | python app.py | node dist/index.js | Single executable |
| OTel HTTP | otelhttp.NewHandler | opentelemetry-instrumentation-flask | @opentelemetry/instrumentation-http | tracing-opentelemetry |
| OTel DB | otelsql | opentelemetry-instrumentation-psycopg2 | @opentelemetry/instrumentation-pg | tracing-opentelemetry |
| Kafka | stove-kafka bridge (sarama / franz-go / kafka-go) | (bridge library needed) | (bridge library needed) | (bridge library needed) |
| Config | Env vars | Env vars | Env vars | Env vars |

The Kotlin test side stays exactly the same --- only processApp() / goApp() and the envMapper change.