Testing Guide¶
Last Updated: 2025-11-23 · Difficulty: Intermediate
This guide covers testing strategies, patterns, and best practices for contributing to ocmonica. It complements the comprehensive TESTING.md in the project root.
Table of Contents¶
- Quick Start
- Test Strategy
- Running Tests
- Test Structure
- Coverage Requirements
- Mocking and Fixtures
- Integration Testing
- Benchmarking
- CI Testing
- Best Practices
Quick Start¶
Running All Tests¶
# Run all tests
task test
# Run with coverage report
task test:coverage
# Run with race detection
go test -v -race ./...
Running Specific Tests¶
# Test a specific package
go test -v ./internal/api/rest/handlers/
# Test a specific function
go test -v -run TestFileHandler_Upload ./internal/api/rest/handlers/
# Test a specific subtest
go test -v -run TestFileHandler_Upload/successful_upload ./internal/api/rest/handlers/
Viewing Coverage¶
# Generate and view HTML coverage report
task test:coverage
# View coverage by file
go tool cover -func=coverage.out
# View total coverage
go tool cover -func=coverage.out | tail -1
Test Strategy¶
Testing Philosophy¶
Ocmonica follows an integration-first testing approach:
- Integration Tests: Primary testing method using real dependencies
- Unit Tests: For isolated business logic and utilities
- Minimal Mocking: Only mock external services (APIs, cloud storage)
- In-Memory Databases: Fast, isolated tests without mocks
Why integration testing?
- Tests actual behavior, catches more bugs
- Easier to maintain (no mock synchronization)
- In-memory SQLite is fast enough
- Validates full stack: handler → service → repository
- Follows Go and ConnectRPC best practices
Test Pyramid¶
      /\
     /  \      E2E (minimal, full stack)
    /    \
   /------\    Integration (primary, in-memory)
  /        \
 /----------\  Unit (utilities, pure functions)
Coverage Goals:
- Service Layer: 85%+ (business logic critical)
- Repository Layer: 80%+ (data access critical)
- Handlers: 75%+ (integration coverage)
- Models: 100% (simple to achieve)
- Overall: 80%+
Running Tests¶
Task Commands¶
# Core testing commands
task test # Run all tests
task test:coverage # Run with HTML coverage report
task test:race # Run with race detector
task test:verbose # Run with verbose output
task test:short # Run quick tests only
Go Test Flags¶
# Verbose output
go test -v ./...
# Race detection (recommended for PRs)
go test -race ./...
# Coverage profile
go test -coverprofile=coverage.out ./...
# Coverage with HTML report
go test -coverprofile=coverage.out ./... && go tool cover -html=coverage.out
# Run tests in parallel
go test -parallel 4 ./...
# Benchmark tests
go test -bench=. ./...
# Specific timeout
go test -timeout 30s ./...
Package-Specific Testing¶
# Handler tests
go test -v ./internal/api/rest/handlers/
# Service tests
go test -v ./internal/service/
# Repository tests
go test -v ./internal/repository/sqlite/
# gRPC tests
go test -v ./internal/api/grpc/
# Model tests
go test -v ./internal/models/
Test Structure¶
File Organization¶
Each package follows the same pattern:
internal/service/
├── file_service.go # Implementation
├── file_service_test.go # Unit/integration tests
└── file_service_bench_test.go # Benchmarks (optional)
Test Function Naming¶
// Pattern: Test<Type>_<Method>
func TestFileService_Upload(t *testing.T)
// Pattern: Test<Type>_<Method>_<Scenario> (for table-driven tests)
func TestFileService_Upload_ValidationErrors(t *testing.T)
// Pattern: Benchmark<Type>_<Method>
func BenchmarkFileService_Upload(b *testing.B)
Setup Helper Pattern¶
Every test file should have a setup helper that creates isolated test instances:
// setupTestFileHandler creates a fresh handler instance with in-memory database
func setupTestFileHandler(t *testing.T) (*FileHandler, func()) {
t.Helper() // Mark as helper for better error reporting
// 1. Create in-memory database
db, err := sql.Open("sqlite", ":memory:")
if err != nil {
t.Fatalf("Failed to open database: %v", err)
}
// 2. Run migrations
if err := sqlite.RunMigrations(db); err != nil {
t.Fatalf("Failed to run migrations: %v", err)
}
// 3. Create temporary storage
tmpDir := t.TempDir() // Auto-cleanup
// 4. Create config
cfg := &config.Config{
Storage: config.StorageConfig{
BasePath: tmpDir,
},
}
// 5. Build dependencies
repo := &repository.Repository{
File: sqlite.NewFileRepository(db),
Tag: sqlite.NewTagRepository(db),
}
service := service.NewFileService(repo, cfg)
handler := NewFileHandler(service)
// 6. Return cleanup function
cleanup := func() {
_ = db.Close()
}
return handler, cleanup
}
Subtest Pattern¶
Use subtests to organize test cases:
func TestFileHandler_Get(t *testing.T) {
handler, cleanup := setupTestFileHandler(t)
defer cleanup()
t.Run("successful retrieval", func(t *testing.T) {
// Happy path test
})
t.Run("file not found", func(t *testing.T) {
// Error case test
})
t.Run("invalid ID format", func(t *testing.T) {
// Validation test
})
t.Run("permission denied", func(t *testing.T) {
// Authorization test
})
}
Coverage Requirements¶
Current Coverage (as of 2025-11-23)¶
| Package | Coverage | Target | Status |
|---|---|---|---|
| `internal/service` | 81.9% | 85% | ⚠️ Needs improvement |
| `internal/api/rest/handlers` | 76.9% | 75% | ✅ Good |
| `internal/api/grpc` | 72.4% | 70% | ✅ Good |
| `internal/models` | 100.0% | 100% | ✅ Perfect |
| `internal/repository/sqlite` | 61.1% | 80% | ⚠️ Needs improvement |
| `pkg/config` | 77.8% | 60% | ✅ Excellent |
| Overall | ~35% | 80% | ⚠️ Improving |
Coverage Targets by Layer¶
- Models (100%): Complete coverage required
  - Simple structs and methods
  - Easy to achieve 100%
  - No excuse for missing coverage
- Services (85%+): High coverage required
  - Business logic lives here
  - Critical for correctness
  - Test all edge cases
- Repositories (80%+): High coverage required
  - Data access layer
  - SQL query correctness
  - Error handling
- Handlers (75%+): Good coverage required
  - Integration coverage
  - Request/response validation
  - Error handling
- Utilities (60%+): Moderate coverage acceptable
  - Helper functions
  - Less critical paths
Measuring Coverage¶
# Generate coverage profile
go test -coverprofile=coverage.out ./...
# View coverage by function
go tool cover -func=coverage.out
# View coverage by package
go tool cover -func=coverage.out | grep -E "^github.com"
# Open HTML report
go tool cover -html=coverage.out
# Get total coverage percentage
go tool cover -func=coverage.out | tail -1
Coverage Best Practices¶
- Don't chase 100% unless it's models
- Focus on critical paths (happy path + error handling)
- Ignore generated code (protobuf, mocks)
- Test behavior, not lines (coverage is a metric, not a goal)
- Write meaningful tests (don't just hit lines)
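One way to act on "ignore generated code" is to filter generated files out of the coverage profile before reporting. A coverage profile is plain text, so a `grep` is enough; the fabricated profile and the `.pb.go`/`_mock.go` patterns below are illustrative only, so adjust them to your generated-file naming:

```shell
# A coverage profile has one "mode:" header line, then one line per
# covered block. Here we fabricate a tiny profile just to demonstrate.
cat > coverage.out <<'EOF'
mode: set
github.com/example/ocmonica/internal/service/file_service.go:10.2,12.3 2 1
github.com/example/ocmonica/internal/api/grpc/v1/file.pb.go:5.1,7.2 3 0
EOF

# Drop protobuf and mock files, then report on the filtered profile.
grep -v -E '\.pb\.go|_mock\.go' coverage.out > coverage.filtered.out
cat coverage.filtered.out
```

In a real run you would produce `coverage.out` with `go test -coverprofile=coverage.out ./...` and feed `coverage.filtered.out` to `go tool cover -func` to get totals without generated code.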
Mocking and Fixtures¶
When to Mock¶
✅ DO mock:
- External HTTP APIs (third-party services)
- Cloud storage services (S3, GCS)
- Time-dependent operations (use time.Now injection)
- Email/SMS gateways
❌ DON'T mock:
- Database (use in-memory SQLite instead)
- File system (use t.TempDir() instead)
- Internal interfaces (use real implementations)
- Generated code (ConnectRPC, protobuf)
Fixture Data¶
Create helper functions for test data:
// createTestFile creates a test file in the database
func createTestFile(t *testing.T, repo *repository.Repository, name string) *models.File {
t.Helper()
file := &models.File{
ID: uuid.New().String(),
Name: name,
Path: name,
Type: models.FileTypeRegular,
MimeType: "text/plain",
Size: 1024,
}
if err := repo.File.Create(context.Background(), file); err != nil {
t.Fatalf("Failed to create test file: %v", err)
}
return file
}
// createTestDirectory creates a test directory in the database
func createTestDirectory(t *testing.T, repo *repository.Repository, name string, parentID *string) *models.File {
t.Helper()
dir := &models.File{
ID: uuid.New().String(),
Name: name,
Path: name,
Type: models.FileTypeDirectory,
MimeType: "inode/directory",
ParentID: parentID,
}
if err := repo.File.Create(context.Background(), dir); err != nil {
t.Fatalf("Failed to create test directory: %v", err)
}
return dir
}
Table-Driven Tests with Fixtures¶
func TestFileService_Upload_ValidationErrors(t *testing.T) {
tests := []struct {
name string
filename string
content string
wantErr bool
errContains string
}{
{
name: "empty filename",
filename: "",
content: "test",
wantErr: true,
errContains: "filename is required",
},
{
name: "path traversal attempt",
filename: "../etc/passwd",
content: "malicious",
wantErr: true,
errContains: "invalid filename",
},
{
name: "valid upload",
filename: "document.pdf",
content: "valid content",
wantErr: false,
},
}
service, _, cleanup := setupTestFileService(t)
defer cleanup()
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
reader := strings.NewReader(tt.content)
_, err := service.Upload(context.Background(), tt.filename, reader, nil)
if tt.wantErr {
if err == nil {
t.Error("Expected error but got nil")
} else if !strings.Contains(err.Error(), tt.errContains) {
t.Errorf("Expected error containing '%s', got: %v", tt.errContains, err)
}
} else {
if err != nil {
t.Errorf("Unexpected error: %v", err)
}
}
})
}
}
Integration Testing¶
Full-Stack Integration Tests¶
Located in test/ directory for cross-layer integration tests:
// test/auth_integration_test.go
func TestAuthenticationFlow(t *testing.T) {
// Setup full stack
db := setupDatabase(t)
services := setupServices(db)
server := setupServer(services)
defer server.Close()
client := setupClient(server.URL)
ctx := context.Background()
t.Run("complete auth flow", func(t *testing.T) {
// 1. Register user
registerResp, err := client.Register(ctx, &v1.RegisterRequest{
Username: "testuser",
Email: "test@example.com",
Password: "SecurePass123!",
})
require.NoError(t, err)
// 2. Login
loginResp, err := client.Login(ctx, &v1.LoginRequest{
Username: "testuser",
Password: "SecurePass123!",
})
require.NoError(t, err)
accessToken := loginResp.Msg.AccessToken
// 3. Use access token for authenticated request
// ...
// 4. Refresh token
// ...
// 5. Logout
// ...
})
}
Testing with Real HTTP Requests¶
Use httptest for REST handlers:
func TestFileHandler_Upload(t *testing.T) {
handler, cleanup := setupTestFileHandler(t)
defer cleanup()
t.Run("multipart upload", func(t *testing.T) {
// Create multipart form
body := &bytes.Buffer{}
writer := multipart.NewWriter(body)
// Add file
part, _ := writer.CreateFormFile("file", "test.txt")
part.Write([]byte("test content"))
writer.Close()
// Create request
e := echo.New()
req := httptest.NewRequest(http.MethodPost, "/files", body)
req.Header.Set("Content-Type", writer.FormDataContentType())
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
// Execute
err := handler.Upload(c)
// Assert
assert.NoError(t, err)
assert.Equal(t, http.StatusCreated, rec.Code)
// Parse response
var response map[string]interface{}
json.Unmarshal(rec.Body.Bytes(), &response)
assert.Equal(t, "test.txt", response["name"])
})
}
Testing ConnectRPC Handlers¶
Use real ConnectRPC clients and httptest servers:
func TestFileServiceHandler_UploadFile(t *testing.T) {
client, cleanup := setupTestFileServiceHandler(t)
defer cleanup()
ctx := context.Background()
t.Run("streaming upload", func(t *testing.T) {
// Create upload stream
stream := client.UploadFile(ctx)
// Send metadata
err := stream.Send(&v1.UploadFileRequest{
Data: &v1.UploadFileRequest_Metadata{
Metadata: &v1.FileMetadata{
Name: "test.txt",
},
},
})
require.NoError(t, err)
// Send chunks
content := []byte("Hello, world!")
chunkSize := 4
for i := 0; i < len(content); i += chunkSize {
end := i + chunkSize
if end > len(content) {
end = len(content)
}
err = stream.Send(&v1.UploadFileRequest{
Data: &v1.UploadFileRequest_Chunk{
Chunk: content[i:end],
},
})
require.NoError(t, err)
}
// Close and receive response
response, err := stream.CloseAndReceive()
require.NoError(t, err)
assert.Equal(t, int64(len(content)), response.Msg.File.Size)
})
}
Benchmarking¶
Writing Benchmarks¶
// file_service_bench_test.go
func BenchmarkFileService_Upload(b *testing.B) {
service, tmpDir, cleanup := setupTestFileService(b)
defer cleanup()
ctx := context.Background()
content := bytes.Repeat([]byte("x"), 1024) // 1KB file
b.ResetTimer() // Reset timer after setup
for i := 0; i < b.N; i++ {
reader := bytes.NewReader(content)
filename := fmt.Sprintf("file%d.txt", i)
_, err := service.Upload(ctx, filename, reader, nil)
if err != nil {
b.Fatalf("Upload failed: %v", err)
}
}
}
func BenchmarkFileService_Get(b *testing.B) {
service, _, cleanup := setupTestFileService(b)
defer cleanup()
ctx := context.Background()
// Setup: Create test file
reader := strings.NewReader("test content")
file, _ := service.Upload(ctx, "bench.txt", reader, nil)
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := service.Get(ctx, file.ID)
if err != nil {
b.Fatalf("Get failed: %v", err)
}
}
}
Running Benchmarks¶
# Run all benchmarks
go test -bench=. ./...
# Run specific benchmark
go test -bench=BenchmarkFileService_Upload ./internal/service/
# With memory allocation stats
go test -bench=. -benchmem ./...
# Multiple runs for accuracy
go test -bench=. -count=5 ./...
# Compare benchmarks
go test -bench=. -count=5 ./... > old.txt
# ... make changes ...
go test -bench=. -count=5 ./... > new.txt
benchstat old.txt new.txt
CI Testing¶
GitHub Actions Integration¶
All tests run automatically on:
- Every push to `main`
- Every pull request
- Manual workflow dispatch
Configuration: `.github/workflows/docker.yml`
CI Test Requirements¶
All PRs must:
- ✅ Pass all tests (`go test ./...`)
- ✅ Pass with race detection (`go test -race ./...`)
- ✅ Include tests for new functionality
- ✅ Maintain or improve coverage
- ✅ Pass linting (`task lint`)
Local CI Simulation¶
Run the same checks that CI runs:
# Full CI check suite
task ci
# Individual checks
task test # Run tests
task test:race # Race detection
task lint # Linting
task security:scan # Security scan
Best Practices¶
✅ DO¶
- Use integration testing with in-memory databases
- Create fresh instances for each subtest
- Test both happy and error paths
- Use descriptive test names (`TestFileHandler_Upload/successful_upload`)
- Clean up resources with `defer cleanup()`
- Test with race detection (`go test -race`)
- Use table-driven tests for multiple scenarios
- Test actual behavior, not implementation details
- Include error message validation in error tests
- Test boundary conditions (empty, nil, max values)
- Use `t.Helper()` in setup functions
- Use `t.TempDir()` for automatic cleanup
❌ DON'T¶
- Don't mock unnecessarily - use real dependencies with in-memory DB
- Don't reuse Echo instances across subtests
- Don't skip cleanup - always defer cleanup functions
- Don't test generated code - protobuf/ConnectRPC generated code
- Don't write flaky tests - ensure deterministic behavior
- Don't ignore race warnings - fix race conditions
- Don't test private functions - test through public APIs
- Don't hardcode IDs - use returned IDs from create operations
- Don't commit commented-out tests - either fix or delete
- Don't use `time.Sleep()` for synchronization - use proper synchronization primitives instead
Common Pitfalls¶
1. Echo Context Reuse¶
❌ Wrong:
e := echo.New()
t.Run("test1", func(t *testing.T) {
c := e.NewContext(req1, rec1)
// ...
})
t.Run("test2", func(t *testing.T) {
c := e.NewContext(req2, rec2)
// Path params from test1 may leak!
})
✅ Correct:
t.Run("test1", func(t *testing.T) {
e := echo.New()
c := e.NewContext(req1, rec1)
// ...
})
t.Run("test2", func(t *testing.T) {
e := echo.New()
c := e.NewContext(req2, rec2)
// Clean slate!
})
2. Forgetting t.Helper()¶
❌ Wrong:
func setupTest(t *testing.T) *Service {
// If this fails, error points here, not the test
service := NewService()
return service
}
✅ Correct:
func setupTest(t *testing.T) *Service {
t.Helper() // Errors will point to the calling test
service := NewService()
return service
}
3. Not Cleaning Up Resources¶
❌ Wrong:
func TestSomething(t *testing.T) {
handler, _ := setupTestHandler(t)
// Database connection leaks!
}
✅ Correct:
func TestSomething(t *testing.T) {
handler, cleanup := setupTestHandler(t)
defer cleanup()
// Resources properly closed
}
Testing Checklist¶
Before submitting a PR:
- All tests pass: `task test`
- Tests pass with race detection: `go test -race ./...`
- Coverage maintained or improved: `task test:coverage`
- New features have tests (both success and error cases)
- Tests are deterministic (no flakiness)
- Cleanup functions are called (`defer cleanup()`)
- Test names are descriptive and follow convention
- No hardcoded values that should be dynamic
- `t.Helper()` used in setup functions
- No commented-out or skipped tests without explanation
Resources¶
Documentation¶
- `TESTING.md` in project root - Comprehensive testing guide
- Go Testing Package - Official documentation
- Echo Testing Guide - Echo framework
- ConnectRPC Testing - ConnectRPC
Examples in Project¶
- Handler Tests: `internal/api/rest/handlers/file_test.go`
- Service Tests: `internal/service/file_service_test.go`
- Repository Tests: `internal/repository/sqlite/file_test.go`
- gRPC Tests: `internal/api/grpc/file_service_test.go`
- Integration Tests: `test/grpc_auth_integration_test.go`
Tools¶
- testify - Assertion library
- golangci-lint - Includes test linters
- benchstat - Compare benchmarks
- Task - Test task automation
Getting Help¶
If you have questions about testing:
- Check `TESTING.md` in project root for comprehensive examples
- Look at existing tests in the same package
- Ask in GitHub Discussions
- Open an issue for clarification
Happy testing!