Blogs

December 10, 2025

Why We Built Our Own Message Queue

Instead of using RabbitMQ or Kafka, we built a simple custom message queue in Go...

When building a real-time chat application, you need a way to handle asynchronous tasks - like saving messages to the database without blocking the user. Most people reach for RabbitMQ or Kafka, but we decided to build our own.

The Problem

We needed to persist chat messages to PostgreSQL, but we didn't want the HTTP request to wait for the database write to complete. We also needed to handle failures and retries.

Our Solution

We built a simple pub/sub system using Go channels. Here's the core:

type MainMQ struct {
    mainMq       map[string]chan *Channel
    mu           sync.RWMutex
    pub          chan PubStruct
    FailedResult chan ResultStruct
    retriesLimit int
}

Topics are created dynamically, workers listen on channels, and failed jobs get republished with a retry limit.

How It Works

When a user sends a message via WebSocket, we immediately deliver it to the recipient through our Hub, then publish the job to our MQ for database persistence. Workers pick up the job and write to PostgreSQL. If it fails, we retry up to N times before giving up.

func (s *chatService) StartWorkerForAddPrivateMessage(channel chan *mq.Channel) {
    for chen := range channel {
        msg := chen.Msg.(chatmodel.MessageMetaData)
        _, err := s.chatRepo.AddMessagePrivate(msg)
        if err != nil {
            chen.RetriesCount++
            s.mq.Republish(chen, chen.RetriesCount)  // retry until the limit
            continue
        }
    }
}

Why Not RabbitMQ?

At our scale (just two developers, early stage), adding another infrastructure component complicates things. We didn't need the advanced features of RabbitMQ - just a simple way to queue jobs. Our custom MQ is about 150 lines of code and does exactly what we need.

What We'd Do Differently

Looking back, we might have used an existing library, or an even simpler worker pool of plain goroutines and channels. We also started in-memory only - if the server restarts, pending jobs are lost. Now we're considering migrating to Redis for persistence.

Lesson: Don't over-engineer early. Start simple, add complexity when you need it.


January 5, 2026

Building a WebSocket Hub in Go

Deep dive into how we handle real-time connections for our chat app...

The heart of any real-time chat app is how you manage WebSocket connections. In this post, we'll walk through our Hub implementation - the component that manages all active connections and routes messages between users.

The Hub Pattern

We use the standard "Hub" pattern popular in Go WebSocket applications. The Hub maintains:

  • Clients - Maps each userID to their active WebSocket connection
  • UsertoChannel - Tracks which chat rooms each user is in
  • ChatToUser - Tracks which users are in each chat room

type Hub struct {
    UsertoChannel map[string]map[string]bool
    ChatToUser    map[string]map[string]bool
    Clients       map[string]*Client
    Register      chan *Client
    Unregister    chan *Client
    Broadcast     chan interface{}
    JoinChan      chan GroupActionInfo
    LeaveChan     chan GroupActionInfo
}

The Client

Each connected user gets a Client struct with their WebSocket connection and a Send channel. We use the classic "pump" pattern - one goroutine reads from WebSocket, another writes to it.

type Client struct {
    Hub    *Hub
    Conn   *websocket.Conn
    Send   chan []byte
    UserID uuid.UUID
}

Message Flow

When a user sends a message:

  1. ReadPump reads JSON from WebSocket
  2. Message goes to Hub's Broadcast channel
  3. Hub determines target users (private vs group)
  4. Hub writes to each recipient's Send channel
  5. WritePump sends data to WebSocket

Handling Disconnections

When a client disconnects, we need to clean up everywhere - remove from Clients map, remove from all ChatToUser maps, and clean up UsertoChannel. This was tricky to get right!

case client := <-h.Unregister:
    delete(h.Clients, client.UserID.String())
    if chatIds, ok := h.UsertoChannel[client.UserID.String()]; ok {
        for chat := range chatIds {
            delete(h.ChatToUser[chat], client.UserID.String())
        }
    }
    delete(h.UsertoChannel, client.UserID.String())
    close(client.Send) // stop the client's write pump

Lessons Learned

The Hub can become a bottleneck as connections grow. We're considering sharding connections across multiple Hub instances. Also, we learned the hard way about proper mutex usage - always lock before modifying maps!


February 1, 2026

Using Gob Encoding for Message Caching

Why we chose Go's binary encoding over JSON for our Redis cache...

Caching is crucial for our chat app - we don't want to hit the database every time someone loads a conversation. But how we store cached data matters. Here's why we switched from JSON to Go's gob encoding.

The Problem

Initially, we stored cached messages as JSON strings in Redis. It worked, but there was overhead - JSON parsing takes CPU, and the string representation is larger than necessary.

Enter Gob

Go's encoding/gob package provides efficient binary encoding. It's native to Go, requires no external dependencies, and produces compact output. Here's our encoding/decoding:

func marshallBinary(payload interface{}) ([]byte, error) {
    var buff bytes.Buffer
    enc := gob.NewEncoder(&buff)
    err := enc.Encode(payload)
    return buff.Bytes(), err
}

func unmarshalBinary(bytesArray []byte) (*chatmodel.MessageCache, error) {
    var payload chatmodel.MessageCache
    buff := bytes.NewBuffer(bytesArray)
    dec := gob.NewDecoder(buff)
    err := dec.Decode(&payload)
    return &payload, err
}

Results

In benchmarks, gob encoding was about 3-5x faster than JSON for our message structures. Memory usage dropped too - binary data is more compact than JSON strings.

The Gotchas

Gob isn't all sunshine:

  • Not cross-language - Can't read gob data from Python or Java. Fine for internal caching.
  • Type registration - Gob needs to know about your types. Works seamlessly within the same Go program.
  • No field tags - Unlike JSON, gob serializes all exported fields. No skipping or renaming.

When to Use What

Use JSON when: sharing data across services, storing in external systems, or need human readability.

Use gob when: Go-to-Go communication, performance matters, and the data doesn't leave your system.

For our Redis cache, gob was the right choice. But we'll keep JSON for our WebSocket messages - browser compatibility matters!


February 15, 2026

Implementing "User is Typing..."

A small feature with interesting implementation details...

You see it in every chat app - those little "User is typing..." indicators. Seems simple, right? Just send a message when someone types. But there's more to it than meets the eye.

The Naive Approach

First implementation: send a "typing" event every time the user presses a key. This works but creates problems - too many WebSocket messages, potential flooding.

Our Solution

We handle typing events through our existing Hub infrastructure. When a client sends a typing event, it goes through the same broadcast system as messages:

case "Typing":
    var p InCommingEventForTyping
    if err := json.Unmarshal(eventMap.Payload, &p); err != nil {
        continue
    }
    c.Hub.Broadcast <- Event{
        Event: "Typing",
        Payload: TypoEvent{
            FromID: c.UserID.String(),
            ToID:   p.ToID,
        },
    }

The Client Side

On the receiving end, we show the typing indicator and set a timeout. If no new typing event arrives within 3 seconds, we hide the indicator. This handles disconnections gracefully - when someone stops typing or leaves, the indicator disappears automatically.

Optimizations We Made

  • Debouncing - Only send typing events after 300ms of no typing (prevents flooding)
  • Rate limiting - Maximum one typing event per second per user
  • Auto-timeout - Indicators disappear after 3-5 seconds of inactivity

What We Learned

Even "simple" features need careful design. The typing indicator seems trivial, but getting it right (not too spammy, not too delayed) took iteration. Now it feels polished and responsive.
