
Why group and order imports in Go?

· One min read

When reviewing Go code, I often find imports that are not grouped in an orderly way. For example, 3rd party imports end up in the middle of built-in imports, or a local import is placed before 3rd party imports. Besides the readability concern, there is a possibility of impacting the logic. In Go, the order of import packages will decide the order of package initialisation. Therefore, if you want to initialise built-in packages first, you should import all built-in packages before any others. You may also want to import 3rd party packages before any local packages. Naturally, the order becomes: built-in packages, 3rd party packages, then local packages.
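For example, a grouped import block might look like the sketch below (the 3rd party and local module paths are placeholders for illustration):

import (
    // built-in (standard library) packages
    "fmt"
    "net/http"

    // 3rd party packages
    "github.com/gorilla/mux"

    // local packages
    "example.com/yourapp/internal/config"
)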

However, you may ask: why do I need to initialise packages in a particular order? The reason is that you don't know whether any of those packages have explicit or implicit dependencies, or whether any of them require a certain order of initialisation. As a safety measure, it's always good to import them in a reasonable order.

Integration Testing with Github Actions

· 2 min read

Integration testing is another layer of testing after unit testing. It groups multiple modules together and applies functional tests to those groups. In the scope of a backend service, it could mean testing your module against a database or an external service. In this post, we'll go through setting up integration testing with Go, PostgreSQL, Flyway and Github Actions.

Let's start with this sample Go app. It includes Test_LoadDataFromDB to test the function LoadDataFromDB, which interacts with a PostgreSQL database. Looking at goapp_integration_test.go, you may notice these go:build lines:

//go:build integration
// +build integration

They are added so these tests will only run with a specific build tag:

go test -tags=integration

This helps separate integration tests from unit tests so you only run them on specific occasions.
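For illustration, such a tagged test might look like the sketch below. The package name, database driver and the signature of LoadDataFromDB are assumptions here, so check the sample repository for the real code:

//go:build integration
// +build integration

package goapp

import (
    "database/sql"
    "testing"

    _ "github.com/lib/pq" // hypothetical choice of PostgreSQL driver
)

func Test_LoadDataFromDB(t *testing.T) {
    // the connection string matches the PostgreSQL service container defined in CI
    db, err := sql.Open("postgres", "postgres://postgres:postgres@localhost:5432/postgres?sslmode=disable")
    if err != nil {
        t.Fatalf("failed to connect to the database: %v", err)
    }
    defer db.Close()

    // LoadDataFromDB's real signature may differ; this is just a sketch
    if _, err := LoadDataFromDB(db); err != nil {
        t.Fatalf("LoadDataFromDB returned an error: %v", err)
    }
}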

The next step is to create a PostgreSQL service in our CI via service containers by adding these lines:

services:
  postgres:
    image: postgres:13
    env:
      # Provide the password for postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - 5432:5432
    # Set health checks to wait until postgres has started
    options: >-
      --health-cmd pg_isready
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5

If you need to set up the database before your integration tests, you can use Flyway to run the migration scripts. Luckily, there is a Github action for that:

- uses: joshuaavalon/flyway-action@v1
  with:
    url: jdbc:postgresql://postgres:5432/postgres
    user: postgres
    password: postgres
    locations: filesystem:./migration/sql

In the above config, ./migration/sql is the path to migration SQL scripts.

Finally, you can just add the go test command to run the integration tests. It will also run the unit tests, so there's no need for a separate step:

- name: Run Tests
  run: |
    go test -v ./... -tags=integration

The full setup can be found in this repository. Although this is just a simplified version, you can add more Docker containers to your CI if your system is more complicated.

Some thoughts on writing tests

· 2 min read

It isn't difficult to realise that testing in general, and unit tests in particular, are important. Tests ensure your code works as expected and guarantee that it won't be broken in the future. If testing is so important, let's write as many tests as possible. If we're able to cover all scenarios, our software will be perfect and there'll be no bugs.

Well, tests don't come free. It takes time and effort to write good tests. But why do we need good tests? When we write software, we rely on tests to ensure correctness. But how can we ensure the correctness of our tests? We can't just write tests for our tests. Therefore, we need to make sure tests are easy to read and easy to understand. They should be so simple that it's hard to make mistakes.

Maintenance is another problem. Like software, we don't write tests and leave them there untouched forever. Tests should be maintainable so they always work correctly to verify your code. Maintaining tests is arguably more important, as they are more prone to break than your software. Therefore, when writing tests, we shouldn't assume that we only write a test once. Believe me, that mindset will backfire very soon.

Writing good tests is difficult and maintaining them isn't easy. That's why I always believe we should have a balanced number of tests. We should have enough tests to cover the most important logic branches, but not so many that they become a burden. When you find that you spend more time maintaining your tests than maintaining your software, you probably have too many tests.

Pre-commit hooks with Go

· 2 min read

Git hook scripts are becoming more and more popular. We run our hooks on every commit to automatically point out simple issues in code before submission to code review. Pointing out these issues at this early phase allows a faster feedback loop than waiting for CI. In the JavaScript world, husky makes Git hooks easy. In Go, Git hooks are still uncommon. In this post, we'll go through the steps to use pre-commit to set up a Git hook for a Go project.

The installation is pretty easy by following their documentation. I have a MacBook, so I'll use Homebrew to install it:

brew install pre-commit

Next, we'll add the pre-commit configuration by creating a file named .pre-commit-config.yaml. You can either create the file from scratch or generate it from a provided sample:

 pre-commit sample-config > .pre-commit-config.yaml

The configuration file will look like:

repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v3.2.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files

Then I'll add the following lines in order to run golangci-lint before every commit that has Go files:

  - repo: https://github.com/golangci/golangci-lint
    rev: v1.41.1
    hooks:
      - id: golangci-lint

For the full set of options, please check their documentation.

After that, run the below command to set up the git scripts:

pre-commit install

Now pre-commit will run automatically on git commit. You can also use pre-commit run -a to run the git hook against all files.

Yep, that's all there is to it. For reference, you can find an example of using pre-commit in a Go project in this repo.

Dependency injection with Go

· 5 min read

One of the advantages of using Go is that programmers can quickly onboard and start writing code. I've seen a colleague who was able to read Go code on the first day and submit a code change for review on the third. Because the language is simple and straightforward, Go programmers can start writing production code without much knowledge of OOP or design patterns like dependency injection. In this post, we'll discuss the importance of dependency injection and how to apply it in Go effectively.

What is dependency injection?

In software engineering, dependency injection is a technique in which an object receives other objects that it depends on. These other objects are called dependencies. In the typical "using" relationship the receiving object is called a client and the passed (that is, "injected") object is called a service. The code that passes the service to the client can be many kinds of things and is called the injector. Instead of the client specifying which service it will use, the injector tells the client what service to use. The "injection" refers to the passing of a dependency (a service) into the object (a client) that would use it.

The above is quoted from Wikipedia, but I find it rather vague in the Go context. So let's see how I understand it.

Firstly, what is a dependency? In my understanding, when a Go object or module (named A) depends on or relies on some functionality from another Go object or module (named B), we say B is a dependency of A. A can have multiple dependencies, and B can be a dependency of different objects/modules. In reality, these connections only become more complicated over time as your business logic grows.

In this particular context, dependency injection is a technique in which the dependency B is passed ("injected") into A. This work can be done manually, but it's usually boring and repetitive. Therefore, it's usually done with the help of a library. In the Java world, Dagger and Spring are famous libraries that handle dependency injection very well. In Go, dig works quite well.

Why should we use dependency injection?

Dependency injection basically implements the dependency inversion principle. It allows us to decouple modules: high-level modules should not depend on low-level modules; instead, both should depend on abstractions (in Go, interfaces). This brings several benefits:

  • Modules are replaceable, and they can be replaced with mocks in order to improve unit testing.
  • The application becomes more flexible as each module can be replaced, extended or upgraded easily.
  • Because modules are loosely coupled, they can be developed in parallel, which improves development velocity.

In Go, proper dependency injection helps structure code much better. Firstly, dependencies are clearly organised and defined by contracts, and global variables are avoided as dependencies are injected instead. Secondly, it also means better modularisation, and the code itself becomes easier to understand and to read.
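As a small sketch of the idea (the names below are hypothetical and not tied to any library), constructor injection against an interface could look like this:

package service

// UserStore is the abstraction that the high-level module depends on.
type UserStore interface {
    FindUser(id string) (string, error)
}

// UserService is the high-level module; it only knows about the abstraction.
type UserService struct {
    store UserStore
}

// NewUserService receives the dependency ("injection") instead of constructing it,
// so tests can pass in a mock UserStore.
func NewUserService(store UserStore) *UserService {
    return &UserService{store: store}
}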

How to implement in Go?

However, manually injecting dependencies isn't clean and can be verbose. Without the help of a good library, the work is manual and error-prone. Luckily, there are some good libraries out there:

  • dig: Only supports identifying dependencies by type. As a result, there is a high chance of conflicts.
  • wire: Supports dependency injection by generating code automatically. It's an interesting approach, but there is still a huge amount of generated code which can be hard to review.

Since writing an injector isn't difficult, I also wrote my own injector. It uses struct tags to indicate injection points, and the library then uses reflection to inject the dependencies.

// ServiceAImpl is the example of an implementation.
type ServiceAImpl struct{}

// ServiceBImpl is another example of an implementation that needs dependencies injected.
type ServiceBImpl struct {
    // Here you can notice that ServiceBImpl requests a dependency with the type of *ServiceAImpl.
    ServiceA *ServiceAImpl `injector:"auto"`
}

func yourInitFunc() {
    i := injector.New()

    // add ServiceAImpl to the injector
    i.Component(&ServiceAImpl{})

    // create an instance of ServiceBImpl and inject its dependencies
    b := &ServiceBImpl{}
    i.Component(b)
}

It also allows initializing a dependency by functions and factories:

// ServiceA has Logger as a dependency.
type ServiceA struct {
    Logger Logger `injector:"logger"`
}

func newServiceA() (*ServiceA, error) {
    // init your serviceA here
    return &ServiceA{}, nil
}

type ServiceB struct {
    Logger Logger `injector:"logger"`
}

// ServiceBFactory creates instances of ServiceB.
type ServiceBFactory struct{}

// Create creates a new instance of ServiceB.
func (f ServiceBFactory) Create() (interface{}, error) {
    // logic to create ServiceB, e.g. via config
    return &ServiceB{}, nil
}

// init func
func yourInitFunc() {
    i := injector.New()
    i.Component("logger", Logger{})

    // serviceA will be created and registered, logger will also be injected
    i.ComponentFromFunc(newServiceA)

    // ServiceB will be created via the factory, and its dependencies will be injected.
    i.ComponentFromFactory(&ServiceBFactory{})
}

To sum up

Usually, if the service is small, it may not be worth adopting a library for dependency injection, but you should always keep the pattern in mind and start with it. Otherwise, the code will become a mess when the service scales to the next level. Starting with the pattern also makes it easy to migrate to a library later if needed.

Create a code generator with protoc

· 5 min read

If your system happens to have a microservices architecture, you may find it repetitive to scaffold new services or to add new endpoints. In such scenarios, Protocol Buffers emerge as an excellent choice for writing API contracts and generating code. However, you sometimes may want to add custom code and you're not sure how to do that. It's actually quite simple with protoc. In this post, we will walk through how to create a code generator with protoc.

Getting started

Before getting started, make sure you have a Go environment ready. It's also important to be familiar with Google protobuf. If you're new to this, I highly recommend running through the Go tutorial to generate Go code for a given protocol definition.

While running through the tutorial, you may notice that we need to install protoc-gen-go in order to generate Go code. Yep, protoc-gen-go is a plugin for the protoc command written by Google, and you can check out its source code. In this guide, we will write a similar program called protoc-gen-my-plugin to generate custom code. The command to execute protoc-gen-go looks like this:

protoc --proto_path=. --go_out=. --go_opt=paths=source_relative foo.proto

The command to execute our plugin will be like:

protoc --proto_path=. --my-plugin_out=. --my-plugin_opt=paths=source_relative foo.proto

In the above command, my-plugin_out specifies the output directory of the generated files and it also tells protoc to use protoc-gen-my-plugin to generate the custom code. my-plugin_opt specifies the option for running the plugin.

Okay, let's write a simple program to test it. At first, I simply use these commands to set up a new Go project:

mkdir protoc-gen-my-plugin
cd protoc-gen-my-plugin
go mod init github.com/bongnv/protoc-gen-my-plugin
export PATH=$PATH:"$(pwd)" # so protoc can find our plugin

Next, create a simple main.go to just print a log:

package main

import (
    "log"
)

func main() {
    log.Println("protoc-gen-my-plugin is called")
}

We also need to draft a simple foo.proto as an example:

syntax = "proto3";

message Foo {}

Now we can try our plugin, and the log should be printed like below:

% go build && protoc --proto_path=. --my-plugin_out=. --my-plugin_opt=paths=source_relative foo.proto

2021/05/08 16:28:31 protoc-gen-my-plugin is called

Write logic to generate code

As protoc can find our plugin and execute it, we now need to write the logic to generate code. As an example, we'll generate a New method for each message that creates a new instance of that message, like a factory function. The generated code will look as simple as this:

// New creates a new instance of Foo.
func (Foo) New() *Foo {
    return &Foo{}
}

Under the hood, protoc communicates with our plugin via stdin and stdout. However, protogen.Options already abstracts that logic, so we can use it to simplify the code by providing a callback function:

func main() {
    log.Println("protoc-gen-my-plugin is called")
    protogen.Options{}.Run(func(plugin *protogen.Plugin) error {
        for _, file := range plugin.Files {
            if !file.Generate {
                continue
            }

            if err := generateFile(plugin, file); err != nil {
                return err
            }
        }

        return nil
    })
}

The callback function takes *protogen.Plugin as input, which contains all the necessary information to generate code. In the above example, I skip all proto files that don't require code generation and call generateFile to generate a source file for each parsed proto file.

Let's look at the function generateFile:

func generateFile(p *protogen.Plugin, f *protogen.File) error {
    // Skip generating file if there is no message.
    if len(f.Messages) == 0 {
        return nil
    }

    filename := f.GeneratedFilenamePrefix + "_my_plugin.pb.go"
    g := p.NewGeneratedFile(filename, f.GoImportPath)
    g.P("// Code generated by protoc-gen-my-plugin. DO NOT EDIT.")
    g.P()
    g.P("package ", f.GoPackageName)
    g.P()

    // generate factory functions
    for _, m := range f.Messages {
        msgName := m.GoIdent.GoName
        g.P("// New creates a new instance of ", msgName, ".")
        g.P("func (", msgName, ") New() *", msgName, " {")
        g.P("return &", msgName, "{}")
        g.P("}")
    }

    return nil
}

I use f.Messages to retrieve the list of parsed messages from the proto file. We can also use f.Services to get the list of parsed services if we need to scaffold a service. NewGeneratedFile creates a new generated file, and g.P adds a new line to that file. Check the protogen API; there is a lot of useful stuff there.
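As a sketch of how f.Services could be used (only emitting comments here; a real scaffold would emit handlers instead), a helper might look like:

// generateServiceComments iterates over the services parsed from the proto file.
func generateServiceComments(g *protogen.GeneratedFile, f *protogen.File) {
    for _, svc := range f.Services {
        g.P("// service: ", svc.GoName)
        for _, method := range svc.Methods {
            g.P("// rpc: ", method.GoName)
        }
    }
}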

Summary

Writing a protoc plugin isn't as complex as it sounds. With the help of protogen.Options and protogen.Plugin, we can easily access the parsed information of proto files. From there, we should be able to generate code to improve development productivity.

From my experience, it can be handy for scaffolding a service with both gRPC and HTTP. It can also be used to generate validator code, data entities or DAOs to interact with the storage layer.


Safely Use Go Context to Pass Data

· 4 min read

Go context is an effective way to pass request-scoped data like requestID, authenticated users, locale information or logger. Instead of having a complex list of parameters in a function, we can use context.Context to pass data and then to simplify the function. Below is a function with and without using context to pass data.

// the function must take all request-scoped data as parameters.
func handlerLogic(w http.ResponseWriter, r *http.Request, requestID string, user *User, locale string, logger Logger) {
    // some handler magic logic
}

and

// the function uses context.Context to store data; therefore, the function signature is much simpler.
// ctx.Value is used to retrieve data when needed.
func handlerLogic(w http.ResponseWriter, r *http.Request) {
    ctx := r.Context()
    requestID, ok := ctx.Value(ctxKeyRequestID).(string)
    // some handler magic logic
}

With context.Context, the function signature becomes simpler and more consistent. Not only is it easier to read, but we can also write middleware to add more functionality. Below is an example of a middleware that injects requestID into the context:

func addRequestID(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ctx := context.WithValue(r.Context(), ctxKeyRequestID, uuid.NewString())
        next.ServeHTTP(w, r.WithContext(ctx))
    })
}

However, such convenience doesn't come without downsides. Firstly, the dependencies and inputs of a function become hidden. For example, the generic signature func handlerLogic(w http.ResponseWriter, r *http.Request) isn't enough to tell whether requestID is needed for the internal logic. We need to read requestID, ok := ctx.Value(ctxKeyRequestID).(string) to know that requestID is needed and is extracted from the context. Such implicitness isn't good for maintenance and is prone to error. Engineers need to read the whole function in order to be aware of such dependencies.

The best solution is not to use context to pass data at all. However, if there is no better way, the data stored in context should be minimal and commonly accepted. My suggestion to mitigate the issue is to write descriptive comments that explicitly explain the dependencies and inputs. The comment should list all the information that is retrieved from the context.
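For example, a handler that reads values from the context could document them explicitly (ctxKeyLocale below is a hypothetical second key used only for illustration):

// handlerLogic handles the request.
// It expects the following request-scoped values in the context:
//   - ctxKeyRequestID: the request ID as a string
//   - ctxKeyLocale: the user's locale as a string (hypothetical)
func handlerLogic(w http.ResponseWriter, r *http.Request) {
    // some handler magic logic
}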

Another compromise when using Go context is the loss of type information. Go context gives up type checking in order to gain the ability to write more versatile code. context.WithValue() and context.Value use interface{} to support different types of data, so there is no type checking at compile time. This makes it unclear what data is required and what type is expected. For example, ctx.Value(ctxKeyRequestID) can return a string or a UUID object, and it's unclear which one when reading the code.

One approach that I usually use is to implement a wrapper library to inject and retrieve data from context. First, it will centralize all pieces of information that we may expect from context. Secondly, it allows us to enforce what data should be injected and expected from context. Take requestID as an example, I would implement a wrapper like below:

package contextutil

import "context"

type ctxKey int

const (
    _ ctxKey = iota
    ctxKeyRequestID
)

// WithRequestID creates a new context that has requestID injected.
func WithRequestID(ctx context.Context, requestID string) context.Context {
    return context.WithValue(ctx, ctxKeyRequestID, requestID)
}

// RequestID tries to retrieve requestID from the given context.
// If it doesn't exist, an empty string is returned.
func RequestID(ctx context.Context) string {
    if requestID, ok := ctx.Value(ctxKeyRequestID).(string); ok {
        return requestID
    }

    return ""
}

WithRequestID and RequestID are provided to ensure that requestID is always a string. contextutil is also a good place to list all the common information which can be found in the context. Once it becomes a convention, we're less likely to be surprised about what data we can retrieve from context, which makes the code more predictable.
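With the wrapper in place, the earlier middleware and handler could be written on top of it; a minimal sketch (the logging line is just an illustration):

func addRequestID(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // inject via the wrapper instead of calling context.WithValue directly
        ctx := contextutil.WithRequestID(r.Context(), uuid.NewString())
        next.ServeHTTP(w, r.WithContext(ctx))
    })
}

func handlerLogic(w http.ResponseWriter, r *http.Request) {
    // RequestID always returns a string; an empty string means it's missing.
    requestID := contextutil.RequestID(r.Context())
    log.Println("handling request", requestID)
}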

In general, I like the idea of using Go context to pass data through the program while processing a request, as it helps make code cleaner. However, I find it useful and handy, yet prone to errors at the same time. Although having a wrapper somewhat limits the downside, I believe the best solution is still to limit the use of Go context to pass data.

Prepare for system design interviews

· 4 min read

Many people are afraid of system design interviews, as there's normally no fixed pattern to prepare for, and questions are open-ended, unpredictable, and quite flexible. It's hard to pin down a correct answer, which makes the preparation process even harder.

In this post, I'll cover some tips that would help you to prepare and potentially impress your interviewers.

Clarify questions

Usually, the question is given without detailed information, and this is intentional in order to test the ability to work with ambiguity. You should ask for further clarification to avoid solving the wrong problem. Sometimes it's OK to make a common-sense assumption; however, make sure to inform the interviewer that you're making an assumption in order to solve the problem.

Give an outline

Like writing engineering specifications, I find it useful to have an outline of the design. Not only does it help you structure your thoughts, but it also helps align expectations with the interviewer. In an interview, I usually recommend the structure below:

  • High-level architecture
  • Data model
  • Choice of techniques & trade-offs
  • Scalability concerns
  • Availability concerns
  • Security concerns

Starting with the high-level architecture gives an overview of your design and allows the interviewer to follow your idea more easily. The choice of techniques & trade-offs is then the place to show your experience and knowledge of different technologies. Security concerns are no less important, especially as people are increasingly concerned about their data privacy these days. A hack or a breach is costly not only in terms of money but also in company reputation.

If you have extra time, it's also good to talk about:

  • Fault tolerance
  • Deployment
  • Rollout
  • Configurations

Start with a simple solution

In my experience, lots of candidates start with a sophisticated solution which ends up confusing themselves, let alone the interviewer. They usually struggle to implement it and to explain the solution to the interviewer. Therefore, I always keep reminding my candidates to start with a simple approach first and then expand it to solve the complex problem. This approach is actually practical, as in real life we start with an MVP and then add more functionality later.

Just to be clear, I don't suggest ending with a simple solution, as the real-world problem is usually more complicated due to business constraints. However, evolving from a simple solution to a more complex one demonstrates critical thinking much better.

Always give reasons

One mistake that I usually see from candidates is making a technical choice without giving any specific reason. For example, one adds a cache and says it's used for caching data, without considering whether there is a performance benefit or an impact on user experience. As a result, this won't help the interviewer assess your experience or knowledge of the given stack. Furthermore, it may create the bad impression that you're just naming the stack and copying the answer from somewhere else.

Consider trade-off

Selecting the right stack or architecture at the beginning is good, but it's always better if you can provide a comparison with an alternative solution and the trade-off. This shows off your experience and deep understanding of the stack and can increase your score significantly. For example, instead of just saying that you'd use a queue for an asynchronous job, you can compare it with a synchronous solution and share the compromise that has to be made.

Those are some tips that I find useful. Of course, it's important to have a wide range of knowledge about different technology stacks. Researching the company in advance also helps you understand its tech stack. Happy practicing and good luck with your interview!

Buying ADA (Cardano) in Singapore

· 2 min read

In order to take the first step into the Cardano world, you will need to buy some ADA. Unfortunately, buying ADA in Singapore is not as straightforward as it could be. It took me a while to find this method of using Binance.

Opening account in Binance

First, you need to open an account with Binance. I would recommend using this invitation link; it will give each of us 10% of the transaction fee as commission.

Once you’ve signed up, you will need to verify the account in order to see the P2P trading option. As it's automated, the process is quite fast. It only took me a couple of minutes. After your account is verified, the next step is to buy some USDT.

Buying USDT

Next, you need to deposit funds in order to buy ADA. I chose the P2P trading option to buy some USDT with SGD because it's fast and has a lower transaction fee. From my experience, a transaction usually takes 5-15 minutes. I chose USDT over BTC as there are more P2P sellers and the fee is also lower.

As you will pay directly to the bank account/PayNow of the seller, it's better to choose those who are verified and have the highest number of completed orders for safety. It's also wise to make sure the seller is online before proceeding to the transfer.

After having USDT, you need to transfer it to Spot Wallet in order to trade for ADA.

Buying ADA

Once having USDT, you can go directly to Markets to trade for ADA. I usually use the Market order so the transaction can be done instantly. You can also use the Limit order for a better price.

From here, you just need to transfer your ADA to an ADA wallet such as Daedalus or Yoroi for safety. You might use a Ledger Nano for cold storage. I don't recommend keeping your ADA on the exchange. Remember: not your keys, not your coins!

Release Go modules automatically

· 5 min read

If you happen to have some awesome Go code, you probably want to share it with others. To share it, you will need to turn it into a Go module and version it, so it can be managed more easily and is friendlier to users.

Well, releasing a Go module isn't so difficult but neither is it straightforward. Moreover, you're an engineer so you want to automate everything including this manual work.

Below is my experience when trying to automate the process of releasing Go modules via semantic-release and Github Actions. It wasn't so smooth but life is much easier after that.

Prerequisites

Commit message format

Our commit message format must follow a convention that is understood by semantic-release. By default, semantic-release uses the Angular Commit Message Conventions, but this can be changed in the configuration.

Here are examples of the release type that will be triggered based on the commit message:

  • fix(pencil): stop graphite breaking when too much pressure applied → Patch Release
  • feat(pencil): add 'graphiteWidth' option → Minor (Feature) Release
  • perf(pencil): remove graphiteWidth option, with the body "BREAKING CHANGE: The graphiteWidth option has been removed. The default graphite width of 10mm is always used for performance reasons." → Major (Breaking) Release

Automated tests

We all want our modules to be released at their best quality. The only way to achieve that is to take testing seriously. Since releasing is done automatically, so should testing be. Therefore, I assume the CI is already implemented with proper automated testing, and we will only trigger the release step if all test cases pass.

Setup semantic-release

Configuration

For Node modules, we can use the default configuration directly. For Go modules, some modifications are required as Go doesn't use the NPM registry. Thus, we need to add .releaserc.json, semantic-release's configuration file, to the root folder of our repository:

{
  "branches": [
    "main",
    {
      "name": "beta",
      "prerelease": true
    }
  ],
  "plugins": [
    "@semantic-release/commit-analyzer",
    "@semantic-release/release-notes-generator",
    "@semantic-release/github"
  ]
}

There are a few things I would like to highlight here:

  • @semantic-release/npm is removed from the default plugins config because we don't need to publish our Go module to NPM repo. It's obvious, right?
  • main is used instead of master, which is the default release branch. For those who may not know, Github recently renamed the default branch from master to main. Reference.
  • beta is used as a pre-release branch when we're in a heavy development phase with frequent breaking changes. To learn more about release workflow, you can look into semantic-release wiki. I'm really impressed by how well it's documented.

For generating a changelog, we would need to include two more plugins. However, I don't see CHANGELOG.md scaling well, and we already have the git history, so not including these steps makes things easier.

Github Actions

Then, we add a new job in Github Actions workflow for releasing. It should look simple like this:

jobs:
  test:
    # an amazing test config
  release:
    name: Release
    runs-on: ubuntu-latest
    needs: test
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          node-version: 12
      - name: Release
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: npx semantic-release

  • needs: test means the release job is only executed if the test job is successful.
  • npx semantic-release is the command to execute semantic-release. GITHUB_TOKEN will be needed for tagging.
  • semantic-release is able to detect if the job is triggered by pull_request and ignore it. Therefore, we won't need to worry about skipping the job from pull requests.

Pre-releases

Sometimes the module is in heavy development and is expected to have multiple breaking changes. In this situation, a pre-release version like v1.0.0-beta.12 is needed before a stable one. semantic-release supports this pretty well. All we need to do is:

  • Create a beta branch and commit your changes here. Relevant commits in this branch will trigger semantic-release to create a new pre-release.
  • Once the module becomes stable, we merge it to the main branch. The merge should trigger semantic-release to create a new stable version.

Summary

semantic-release is a handy library. It frees your hands from releasing modules manually while keeping releases timely. One downside is that a buggy module can be released if automated tests are not properly implemented.

semantic-release is not the only package out there to automate the process. Another alternative, written in Go, is go-semantic-release. go-semantic-release doesn't have as many plugins as semantic-release, but it works better with Go modules. In particular, it allows starting versioning at 0.x.x, which is Go's convention for pre-releases.
