# Testing Guide
All code contributions MUST include tests. This is a mandatory requirement for Aether.
## Running Tests
```bash
cd goway
make test             # Run all tests
go test -v ./...      # Run tests with verbose output
go test -cover ./...  # Run tests with coverage
go test -race ./...   # Run tests with race detector
```

## Test Structure
Tests should be placed alongside the code they test:
```
goway/internal/
├── application/task/
│   ├── registry.go
│   ├── registry_test.go
│   ├── config.go
│   └── config_test.go
├── infrastructure/adapter/llm/
│   ├── service.go
│   └── service_test.go
└── test/mocks/
    └── repositories.go   # Shared mock implementations
```

## Testing Guidelines
### 1. Unit Tests
Required for all business logic:
- Test each public function/method
- Cover happy path and error scenarios
- Use table-driven tests for multiple cases
Example:
```go
import (
	"testing"

	"github.com/stretchr/testify/assert"
)

// strPtr returns a pointer to the given string literal.
func strPtr(s string) *string { return &s }

func TestNormalizeEvent(t *testing.T) {
	tests := []struct {
		name     string
		source   string
		event    string
		action   *string
		expected *string
	}{
		{
			name:     "github issues opened",
			source:   "github",
			event:    "issues",
			action:   strPtr("opened"),
			expected: strPtr("ticket.created"),
		},
		{
			name:     "unknown source returns nil",
			source:   "unknown",
			event:    "issue",
			action:   nil,
			expected: nil,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result := NormalizeEvent(tt.source, tt.event, tt.action)
			if tt.expected == nil {
				assert.Nil(t, result)
			} else {
				assert.Equal(t, *tt.expected, *result)
			}
		})
	}
}
```

### 2. Mock Usage
Use `testify/mock` for dependencies:

```go
import "goway/internal/test/mocks"

func TestGetAgent(t *testing.T) {
	ctx := context.Background()
	agent := &entity.Agent{ID: "pm_sarah"}

	mockRepo := &mocks.AgentRepository{}
	mockRepo.On("GetByID", ctx, "pm_sarah").Return(agent, nil)

	service := NewAgentService(mockRepo)
	result, err := service.GetAgent(ctx, "pm_sarah")

	assert.NoError(t, err)
	assert.Equal(t, "pm_sarah", result.ID)
	mockRepo.AssertExpectations(t)
}
```

### 3. Test Naming
Use descriptive names that explain what is being tested:
```go
func TestGetWorkflowContext(t *testing.T) {
	t.Run("returns error when mapping repo is nil", func(t *testing.T) {
		// Test implementation
	})

	t.Run("returns workflow context with states and labels", func(t *testing.T) {
		// Test implementation
	})
}
```

## Coverage Requirements
- New code: ≥80% test coverage
- Critical paths (LLM service, task registry): ≥90% coverage
Check coverage:
```bash
go test -cover ./...
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out
```

## What to Test
### ✅ DO Test
- Business logic and domain rules
- Event normalization and task resolution
- Configuration parsing
- Error handling paths
- Edge cases and boundary conditions
Example - Testing Event Normalization:
```go
func TestTaskRegistry_NormalizeEvent(t *testing.T) {
	// mockMappingRepo is assumed to be a mapping-repository test double
	// (see internal/test/mocks).
	registry := NewTaskRegistry(mockMappingRepo)

	tests := []struct {
		name            string
		source          string
		eventName       string
		action          string
		expectedTrigger string
		expectError     bool
	}{
		{
			name:            "github issue opened",
			source:          "github",
			eventName:       "issues",
			action:          "opened",
			expectedTrigger: "issue.created",
			expectError:     false,
		},
		{
			name:        "unmapped event",
			source:      "unknown",
			eventName:   "unknown",
			expectError: true,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			trigger, err := registry.NormalizeEvent(tt.source, tt.eventName, tt.action)
			if tt.expectError {
				assert.Error(t, err)
				return
			}
			assert.NoError(t, err)
			assert.Equal(t, tt.expectedTrigger, trigger)
		})
	}
}
```

### ❌ DON'T Test
- Third-party libraries
- Simple getters/setters
- Main function setup code
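A common way to keep `main` in the "don't test" bucket is to move its logic into a plain function and leave `main` as a thin shell. The sketch below is illustrative only; the `run` split and its signature are an assumption, not code from the repository:

```go
package main

import (
	"fmt"
	"strings"
)

// run carries the testable logic; main stays a thin, untested shell
// that only wires arguments in and prints the result.
// (Hypothetical example; not a function from the goway codebase.)
func run(args []string) (string, error) {
	if len(args) == 0 {
		return "", fmt.Errorf("no arguments given")
	}
	return strings.Join(args, " "), nil
}

func main() {
	out, err := run([]string{"hello", "world"})
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(out) // prints "hello world"
}
```

With this split, `run` gets ordinary unit tests while `main` itself needs none.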
## Common Testing Patterns
### Testing with Context
```go
func TestWithContext(t *testing.T) {
	// Bound the test with a timeout so a hung call fails fast
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	result, err := service.Process(ctx, input)
	assert.NoError(t, err)
	assert.NotNil(t, result)
}
```

### Testing Error Cases
```go
func TestErrorHandling(t *testing.T) {
	t.Run("returns ErrNotFound when agent doesn't exist", func(t *testing.T) {
		mockRepo := &mocks.AgentRepository{}
		mockRepo.On("GetByID", mock.Anything, "nonexistent").
			Return(nil, repository.ErrNotFound)

		service := NewAgentService(mockRepo)
		_, err := service.GetAgent(context.Background(), "nonexistent")

		assert.Error(t, err)
		assert.ErrorIs(t, err, repository.ErrNotFound)
	})
}
```

### Testing with Database
For integration tests that require a database:
```go
func setupTestDB(t *testing.T) *pgxpool.Pool {
	dbURL := os.Getenv("TEST_DATABASE_URL")
	if dbURL == "" {
		t.Skip("TEST_DATABASE_URL not set")
	}

	pool, err := pgxpool.New(context.Background(), dbURL)
	require.NoError(t, err)

	t.Cleanup(func() {
		pool.Close()
	})
	return pool
}

func TestAgentRepository_Integration(t *testing.T) {
	db := setupTestDB(t)
	repo := postgres.NewAgentRepository(db)

	agent := &entity.Agent{
		ID:     "test_agent",
		Name:   "Test Agent",
		Role:   "pm",
		Prompt: "Test prompt",
	}

	err := repo.Create(context.Background(), agent)
	assert.NoError(t, err)

	retrieved, err := repo.GetByID(context.Background(), "test_agent")
	assert.NoError(t, err)
	assert.Equal(t, agent.Name, retrieved.Name)
}
```

## Best Practices
### 1. Keep Tests Fast
- Mock external dependencies
- Avoid unnecessary sleeps
- Use parallel tests where possible
```go
func TestParallel(t *testing.T) {
	tests := []struct {
		name  string
		input int
	}{
		{"test1", 1},
		{"test2", 2},
	}

	for _, tt := range tests {
		tt := tt // Capture range variable (required before Go 1.22)
		t.Run(tt.name, func(t *testing.T) {
			t.Parallel() // Run subtests in parallel
			// Test implementation
		})
	}
}
```

### 2. Use Test Helpers
```go
// createTestAgent builds a minimal agent fixture; t.Helper() makes
// failures report the caller's line rather than this helper's.
func createTestAgent(t *testing.T, id string) *entity.Agent {
	t.Helper()
	return &entity.Agent{
		ID:     id,
		Name:   "Test Agent",
		Role:   "pm",
		Prompt: "Test prompt",
	}
}

func TestWithHelper(t *testing.T) {
	agent := createTestAgent(t, "test_id")
	// Use agent in test
	_ = agent
}
```

### 3. Clean Up Resources
```go
func TestWithCleanup(t *testing.T) {
	resource := setupResource()

	// t.Cleanup runs after the test (and its subtests) finish,
	// even when the test fails.
	t.Cleanup(func() {
		resource.Close()
	})

	// Test implementation
}
```

## Debugging Tests
### Run Specific Test
```bash
# Run a specific test
go test -v -run TestAgentRepository_GetByID ./...

# Run a specific subtest
go test -v -run TestAgentRepository_GetByID/returns_error_when_not_found ./...
```

### Print Debug Information
```go
func TestWithDebug(t *testing.T) {
	result := doSomething()
	t.Logf("Result: %+v", result) // Only shown on failure or with -v
	assert.Equal(t, expected, result)
}
```

### Enable Race Detector
```bash
go test -race ./...
```

## Pull Request Checklist
Before submitting a PR, ensure:
- [ ] All existing tests pass (`make test`)
- [ ] New code has accompanying tests
- [ ] Tests cover both success and error cases
- [ ] No test uses hardcoded sleep/delays
- [ ] Mocks are used for external dependencies
- [ ] Test coverage meets requirements (≥80%)
- [ ] Tests are clear and well-named
- [ ] No flaky tests (tests pass consistently)
