Essential Go Interview Questions to Boost Your Job Prospects
A comprehensive collection of Go interview questions covering fundamentals, concurrency, runtime, microservices, Docker, Redis, and MySQL, designed to help developers deepen their knowledge and confidently tackle technical interviews for backend engineering roles.
Go Basics
Why choose Go? It is a statically‑typed compiled language with fast build times, native support for concurrency (goroutines and channels), a rich standard library, built‑in garbage collection, and excellent cross‑compilation support.
Core data types include bool; numeric types (int, int8/16/32/64, uint, uint8/16/32/64, byte, rune, float32/64, complex64/128); string; and the composite types array, slice, map, struct, pointer, function, interface, and channel.
A package is a collection of Go source files in one directory compiled together; it provides namespace isolation and is the unit of compilation and import.
Type conversion uses the syntax T(v). Example converting an int to float64:
var i int = 42
var f float64 = float64(i)
Goroutine is a lightweight thread managed by the Go runtime. It can be stopped by returning from the function, cancelling a context, or closing a channel that the goroutine is listening on.
Runtime type inspection uses the reflect package:
t := reflect.TypeOf(v)
fmt.Println(t.Kind())
Interface relationships – one interface can be a subset of another; a type implements an interface implicitly by providing the required methods.
sync.Mutex provides mutual exclusion. It has two states (locked/unlocked) and can be used to protect critical sections.
Channels are typed conduits for communication between goroutines. They can be buffered or unbuffered. Buffered channels store a fixed number of elements; unbuffered channels synchronize sender and receiver.
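A small sketch of the buffered case (the function name bufferedDemo is illustrative): two sends succeed immediately because the buffer has room, whereas an unbuffered channel would block the first send until a receiver was ready.

```go
package main

import "fmt"

// bufferedDemo sends two values into a buffered channel with no
// receiver waiting, then drains them.
func bufferedDemo() []int {
	ch := make(chan int, 2) // capacity 2: both sends proceed without a receiver
	ch <- 1
	ch <- 2
	close(ch)
	var out []int
	for v := range ch { // range drains buffered values, then stops at close
		out = append(out, v)
	}
	return out
}

func main() {
	fmt.Println(bufferedDemo()) // [1 2]
}
```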
cap returns the capacity of slices, arrays, and channels.
new vs make – new(T) allocates zeroed storage for type T and returns a pointer; make initializes slices, maps and channels and returns the ready‑to‑use value.
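The difference can be seen side by side in a short sketch (newVsMake is an illustrative name):

```go
package main

import "fmt"

// newVsMake contrasts new (pointer to zeroed storage) with make
// (initialized slice/map ready for use).
func newVsMake() (zero, length, capacity, mapVal int) {
	p := new(int)             // *int pointing at a zeroed int
	s := make([]int, 0, 4)    // usable slice: len 0, cap 4
	m := make(map[string]int) // usable map; writing to a nil map would panic
	m["a"] = 1
	return *p, len(s), cap(s), m["a"]
}

func main() {
	fmt.Println(newVsMake()) // 0 0 4 1
}
```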
Formatting functions – fmt.Printf writes to standard output, fmt.Sprintf returns the formatted string, fmt.Fprintf writes to an arbitrary io.Writer.
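A quick sketch of the three variants (formatDemo is an illustrative name); strings.Builder stands in for an arbitrary io.Writer:

```go
package main

import (
	"fmt"
	"strings"
)

// formatDemo: Sprintf returns the string, Fprintf writes to any
// io.Writer (here a strings.Builder), Printf goes to stdout.
func formatDemo() (string, string) {
	s := fmt.Sprintf("pi = %.2f", 3.14159)
	var b strings.Builder
	fmt.Fprintf(&b, "n=%d", 42)
	return s, b.String()
}

func main() {
	s, w := formatDemo()
	fmt.Printf("%s | %s\n", s, w) // pi = 3.14 | n=42
}
```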
Arrays vs slices – arrays have fixed length as part of their type; slices are descriptors (pointer, length, capacity) that reference an underlying array.
Value vs reference passing – Go passes arguments by value. Passing a pointer or a slice (which contains a pointer) allows the callee to modify the underlying data.
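A sketch of all three cases (function names are illustrative): a plain value copy, a pointer, and a slice whose copied header still points at the caller's array.

```go
package main

import "fmt"

// setByValue gets a copy; the caller's variable is untouched.
func setByValue(n int) { n = 99 }

// setByPointer gets an address and can modify the original.
func setByPointer(n *int) { *n = 99 }

// fillSlice mutates the shared underlying array: the slice header is
// copied, but its pointer still refers to the caller's storage.
func fillSlice(s []int) {
	for i := range s {
		s[i] = 7
	}
}

func main() {
	x := 1
	setByValue(x)
	fmt.Println(x) // still 1
	setByPointer(&x)
	fmt.Println(x) // 99
	s := []int{1, 2, 3}
	fillSlice(s)
	fmt.Println(s) // [7 7 7]
}
```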
Slice growth algorithm – when an append exceeds the capacity, the runtime allocates a new underlying array and copies the elements; small slices roughly double in capacity, while larger ones grow by a smaller factor (about 1.25×).
Defer statements are stacked LIFO and executed after the surrounding function returns, useful for resource cleanup.
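The LIFO order can be observed directly in a short sketch (deferOrder is an illustrative name):

```go
package main

import "fmt"

// deferOrder registers three deferred calls and returns the order in
// which they ran: last deferred, first executed.
func deferOrder() []int {
	var order []int
	func() {
		for i := 1; i <= 3; i++ {
			i := i // capture per-iteration value (needed before Go 1.22)
			defer func() { order = append(order, i) }()
		}
	}()
	return order
}

func main() {
	fmt.Println(deferOrder()) // [3 2 1]
}
```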
Slice implementation – a slice header (ptr, len, cap) points to an array. Expanding a slice may allocate a new array and copy data, which can invalidate previous references.
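The reallocation effect can be demonstrated in a few lines (growDetach is an illustrative name): after an append that exceeds capacity, the grown slice no longer shares storage with the original.

```go
package main

import "fmt"

// growDetach shows reallocation: appending past capacity copies the
// elements to a new backing array.
func growDetach() (old, grown []int) {
	old = make([]int, 1, 1) // len 1, cap 1: the next append must grow
	old[0] = 1
	grown = append(old, 2) // new backing array allocated here
	grown[0] = 100         // does not touch old's array
	return old, grown
}

func main() {
	old, grown := growDetach()
	fmt.Println(old, grown) // [1] [100 2]
}
```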
Map implementation – a hash table with buckets. When the load factor exceeds a threshold, the map grows by allocating a larger bucket array and rehashing entries.
Go Concurrency
sync.Mutex states – the state word tracks locked, woken, and (since Go 1.9) starving flags, plus a count of waiting goroutines.
Normal vs starvation mode – normal mode favors the owner; starvation mode (after a threshold of lock hand‑offs) gives priority to waiting goroutines to avoid indefinite postponement.
Spin‑lock condition – a mutex may spin briefly on multi‑core CPUs when the lock holder is running on another processor, reducing context‑switch overhead.
sync.RWMutex – allows multiple readers or a single writer. Writers acquire an exclusive lock; readers acquire a shared lock.
Precautions – never hold an RWMutex for long, avoid lock promotion (reader to writer) and be aware of potential writer starvation.
sync.Cond – a condition variable built on a Locker. Use c.Wait() to block, c.Signal() to wake one waiter, and c.Broadcast() to wake all.
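A sketch of the classic blocking-queue use of sync.Cond (condQueue is an illustrative name); note that Wait must always sit inside a loop that re-checks the predicate:

```go
package main

import (
	"fmt"
	"sync"
)

// condQueue blocks consumers until an item is available.
type condQueue struct {
	mu    sync.Mutex
	cond  *sync.Cond
	items []int
}

func newCondQueue() *condQueue {
	q := &condQueue{}
	q.cond = sync.NewCond(&q.mu)
	return q
}

func (q *condQueue) put(v int) {
	q.mu.Lock()
	q.items = append(q.items, v)
	q.mu.Unlock()
	q.cond.Signal() // wake one waiting consumer
}

func (q *condQueue) take() int {
	q.mu.Lock()
	defer q.mu.Unlock()
	for len(q.items) == 0 { // re-check the predicate after every wakeup
		q.cond.Wait() // atomically unlocks, sleeps, relocks on wake
	}
	v := q.items[0]
	q.items = q.items[1:]
	return v
}

func main() {
	q := newCondQueue()
	go q.put(42)
	fmt.Println(q.take()) // blocks until the producer signals
}
```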
sync.WaitGroup – tracks a set of goroutines. Call Add(n), then each goroutine calls Done(); Wait() blocks until the counter reaches zero.
Implementation principle – WaitGroup uses an atomic counter; Done decrements it, and Wait spins or parks until the counter is zero.
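A short fan-out sketch combining WaitGroup with an atomic accumulator (parallelSum is an illustrative name):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// parallelSum adds each element in its own goroutine and waits for
// all of them with a WaitGroup.
func parallelSum(nums []int) int64 {
	var wg sync.WaitGroup
	var total int64
	for _, n := range nums {
		wg.Add(1) // register before the goroutine starts
		go func(n int) {
			defer wg.Done() // decrement even if the work panics
			atomic.AddInt64(&total, int64(n))
		}(n)
	}
	wg.Wait() // blocks until the counter reaches zero
	return total
}

func main() {
	fmt.Println(parallelSum([]int{1, 2, 3, 4})) // 10
}
```

Calling Add inside the goroutine instead of before launching it is a common race: Wait may return before the late Add runs.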
sync.Once – guarantees that a function is executed only once, even when called from multiple goroutines.
Atomic operations – provided by sync/atomic (e.g., atomic.AddInt64, atomic.CompareAndSwapUint32). They operate on aligned 32‑ or 64‑bit values without locks.
CAS (Compare‑And‑Swap) – an atomic primitive that updates a value only if it matches an expected old value, forming the basis of lock‑free algorithms.
sync.Pool – a cache of temporary objects to reduce allocation pressure; objects are reclaimed by the GC when not in use.
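A typical buffer-reuse sketch (bufPool and greet are illustrative names); the Reset call matters because pooled objects keep their previous contents:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out reusable buffers; New runs only when the pool is empty.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

// greet borrows a buffer, uses it, and returns it to the pool.
func greet(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset() // pooled objects keep old contents: always reset
	defer bufPool.Put(buf)
	buf.WriteString("hello, ")
	buf.WriteString(name)
	return buf.String()
}

func main() {
	fmt.Println(greet("go")) // hello, go
}
```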
Go Runtime
Goroutine definition – a function executing concurrently, managed by the Go scheduler.
GMP model – G (goroutine), M (machine, an OS thread), P (processor, a logical scheduling context). The scheduler multiplexes many goroutines (G) onto a smaller set of OS threads (M), using P as the token that grants the right to run Go code and holds a local run queue.
Early scheduling (before the current GMP design) – a GM model with a single global run queue protected by one lock; goroutines switched only at explicit points (channel operations, syscalls, function calls), which scaled poorly on multicore machines.
Current GMP workflow – P holds a run‑queue of Gs; an M executes the G at the head of the queue. When a G blocks, its M is detached and may be reassigned to another P.
Work stealing – idle Ps steal Gs from other Ps' run‑queues (and the global queue) to keep all CPUs busy.
Hand‑off mechanism – when a goroutine blocks in a syscall, its P is detached from the blocked M and handed to another (possibly new) M so the remaining runnable Gs keep executing; when the syscall returns, the G is rescheduled.
Cooperative vs pre‑emptive scheduling – Go uses cooperative scheduling for most code but, since Go 1.14, employs asynchronous signal‑based preemption to interrupt long‑running CPU‑bound goroutines.
sysmon – a background thread (running without a P) that monitors P and M states, retakes Ps stuck in long syscalls, forces preemption of long‑running goroutines, and triggers a forced GC if none has run for two minutes.
Three‑color marking GC – objects are white (not yet visited), gray (reachable but not yet scanned), or black (scanned). The collector repeatedly scans gray objects, turning them black, until no gray remain; objects still white are garbage.
Write barriers – inserted around pointer writes to keep the GC's view of the heap consistent while mutators run. Variants include the insertion (Dijkstra) barrier, the deletion (Yuasa) barrier, and the hybrid barrier Go uses since 1.8.
GC trigger – a cycle starts when the heap grows beyond the target set by GOGC (default 100, i.e. roughly 2× the live data from the previous cycle), when runtime.GC() is called manually, or after two minutes without a cycle.
GC tuning – adjust GOGC environment variable, reduce allocation rate, and avoid large object churn to improve pause times.
Microservices
Definition – an architectural style that structures an application as a collection of loosely coupled services, each owning its own data and deployed independently.
Benefits – independent scaling, fault isolation, technology heterogeneity, faster deployment cycles.
Key characteristics – bounded context, API‑first communication (often REST/HTTP or gRPC), decentralized data management, automated DevOps pipelines.
Design best practices – keep services small, define clear contracts, use domain‑driven design (DDD) to align services with business domains, enforce strong cohesion and low coupling.
DDD concepts – ubiquitous language, bounded context, aggregates, and entities help model complex domains.
REST vs RPC – REST is resource‑oriented, stateless, cacheable; RPC (e.g., gRPC) is operation‑oriented, supports streaming and strong typing.
Testing types – unit tests, contract tests, integration tests, end‑to‑end tests, and chaos/resilience testing.
Docker (Container Technology)
Docker image vs container – an image is a read‑only layered filesystem; a container is a runnable instance with its own writable layer.
Dockerfile basics – FROM, RUN, COPY (copies files from build context), ADD (adds files and can unpack archives or fetch URLs), CMD, ENTRYPOINT, ONBUILD (defers instructions to child images).
Layering – each Dockerfile instruction creates a new immutable layer; layers are cached and shared across images, reducing build time.
Container lifecycle states – created, running, paused, restarting, exited, and dead. docker ps -a shows status.
Docker Swarm – native clustering and orchestration, providing service discovery, load balancing, and rolling updates.
Monitoring – use docker stats, cAdvisor, Prometheus + node_exporter, or third‑party platforms.
Common pitfalls – container leaks (unremoved stopped containers), mounting volumes incorrectly, and using privileged mode unnecessarily.
Running on non‑Linux hosts – Docker Desktop provides a lightweight VM on macOS/Windows; native Linux containers run directly on the host kernel.
Redis
Data structures – strings, lists, sets, sorted sets, hashes, streams, bitmaps, hyperloglogs, and geospatial indexes.
Persistence – RDB snapshots (periodic) and AOF (append‑only file) which can be rewritten; hybrid persistence combines both for durability and fast restarts.
Eviction policies – noeviction, allkeys-lru, allkeys-random, volatile-lru, volatile-random, volatile-ttl.
Pipelines – batch multiple commands to reduce round‑trip latency:
pipe := rdb.Pipeline()
pipe.Set(ctx, "key1", "val1", 0)
pipe.Incr(ctx, "counter")
_, err := pipe.Exec(ctx)
Cluster architecture – data sharded across 16384 hash slots; each node owns a subset of slots. Replication is master‑slave; each master has one or more replicas for failover.
Write loss scenarios – if a master fails before its replicas have persisted the write, the write may be lost. Using WAIT or synchronous replication mitigates this.
Transactions – MULTI/EXEC block; commands are queued and executed atomically. WATCH provides optimistic locking.
Distributed locks – a single‑instance lock uses SET key value NX PX ttl to acquire with expiration, releasing via a Lua script that deletes the key only if the value matches; the Redlock algorithm extends this across multiple independent masters.
Memory optimisation – use appropriate data types (e.g., hashes for many small fields), enable hash-max-ziplist-entries, set maxmemory, and monitor INFO memory.
MySQL
Normalization – First, Second, and Third Normal Forms eliminate redundancy and ensure functional dependency.
System privilege tables – mysql.user, mysql.db, mysql.tables_priv, etc., store global, database‑level, and table‑level permissions.
Binlog formats – STATEMENT (SQL statements), ROW (row changes), and MIXED (auto‑select). ROW provides higher fidelity for replication.
Storage engines – InnoDB (transactional, row‑level locking, MVCC) vs MyISAM (non‑transactional, table‑level locking, faster reads). InnoDB is default for most workloads.
Indexes – B‑tree primary and secondary indexes; clustered index is the primary key in InnoDB, storing row data with the index.
Isolation levels – READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ (default), SERIALIZABLE. They control visibility of uncommitted changes and phantom reads.
CHAR vs VARCHAR – CHAR is fixed‑length, padded with spaces; VARCHAR is variable‑length with a length prefix, more space‑efficient for varying data.
Left‑most prefix principle – MySQL can use an index only for the leftmost columns of a composite index.
Monetary values – store using DECIMAL(precision, scale) to avoid floating‑point rounding errors.
When indexes hurt – high write‑heavy tables, low‑selectivity columns, or queries that cannot use the leftmost prefix.
Handling millions of rows – partition tables, proper indexing, query optimisation, and using covering indexes to reduce I/O.
Go Development Architecture Practice
Daily sharing of Golang-related technical articles, practical resources, language news, tutorials, real-world projects, and more. Looking forward to growing together. Let's go!
