Notes


Go Compression Benchmark Results

March 19, 2026

This is an LLM-generated post that summarizes the benchmarks used to choose a compression library for localtimezone, a lat/lng -> timezone lookup library.

A benchmark of pure-Go compression libraries against real-world binary data: H3 timezone cells from localtimezone, a library that needed fast and efficient decompression of an ~8.5 MB binary file. All ten libraries tested are pure Go with no CGo dependencies, making the results directly applicable to any Go project that needs cross-platform compression.

The benchmark code is available at github.com/albertyw/go-compression-benchmark. Full per-run result tables: data.h3 results · data_mock.h3 results.


Key Findings

Best compression ratio: XZ/LZMA2 — but at a steep cost

XZ achieves a 10.33x ratio on the 8.5 MB file, shrinking it to 848 KB. The catch: compression takes 450ms and uses 64 MB of memory with nearly a million allocations. Decompression is more reasonable at 97ms. If you compress offline and only pay the decompression cost at runtime, this can work — but the memory spike and allocation count during compression rule it out for any hot path.
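
If you only pay the decompression cost at runtime, the reader side stays small. A minimal sketch, assuming the pure-Go github.com/ulikunitz/xz package (the post doesn't name the exact XZ library used) and a hypothetical pre-compressed asset:

    // Sketch: decompress an XZ-packed asset at startup; assumes the pure-Go
    // github.com/ulikunitz/xz package. The file name is hypothetical.
    package main

    import (
        "io"
        "os"

        "github.com/ulikunitz/xz"
    )

    func main() {
        f, err := os.Open("data.h3.xz")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        r, err := xz.NewReader(f)
        if err != nil {
            panic(err)
        }
        data, err := io.ReadAll(r) // pay only the ~97ms decompression cost here
        if err != nil {
            panic(err)
        }
        _ = data // hand the decompressed H3 payload to the parser
    }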

Best balance: Zstd

Zstd is the standout. At SpeedFastest it achieves a 5.78x ratio, compressing in just 31ms with only 16 MB of memory, and decompression needs just 7 allocations. Stepping up to SpeedBestCompression improves the ratio to 6.67x at the cost of 357ms compression time. Decompression is consistently fast at ~13ms across all levels. For most use cases, Zstd SpeedFastest or SpeedDefault is the right call.
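
A minimal round-trip sketch, assuming github.com/klauspost/compress/zstd (the usual pure-Go Zstd implementation; the benchmark repo may wire it up differently):

    // Sketch: Zstd round-trip with github.com/klauspost/compress/zstd.
    package main

    import (
        "fmt"

        "github.com/klauspost/compress/zstd"
    )

    func main() {
        data := []byte("stand-in payload; the benchmark feeds the 8.5 MB data.h3 file")

        // Encoder pinned to SpeedFastest, the level recommended above.
        enc, err := zstd.NewWriter(nil, zstd.WithEncoderLevel(zstd.SpeedFastest))
        if err != nil {
            panic(err)
        }
        defer enc.Close()
        compressed := enc.EncodeAll(data, nil)

        // DecodeAll skips streaming overhead for blobs already in memory.
        dec, err := zstd.NewReader(nil)
        if err != nil {
            panic(err)
        }
        defer dec.Close()
        restored, err := dec.DecodeAll(compressed, nil)
        if err != nil {
            panic(err)
        }
        fmt.Printf("ratio: %.2fx\n", float64(len(restored))/float64(len(compressed)))
    }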

Fastest compression throughput: pgzip and S2

pgzip (parallel gzip) compresses the 8.5 MB file in 5.6ms at BestSpeed by using multiple CPU cores. S2 Better is even faster at 4.9ms, single-threaded. Both pay for this speed in ratio: pgzip gets ~5.16x, S2 Better only 2.40x. These are strong choices for pipelines where compression is on the critical path and throughput matters more than size.
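
S2's block API is a single call per direction. A sketch assuming github.com/klauspost/compress/s2, with a stand-in payload:

    // Sketch: one-shot S2 block compression via github.com/klauspost/compress/s2.
    package main

    import (
        "fmt"

        "github.com/klauspost/compress/s2"
    )

    func main() {
        data := []byte("stand-in for the 8.5 MB benchmark file")
        compressed := s2.EncodeBetter(nil, data) // the "Better" mode measured above
        fmt.Println(len(data), "->", len(compressed), "bytes")
    }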

Fastest decompression: Snappy and S2

Snappy decompresses in 4.6ms with just 1 allocation — the lowest overhead of any library. S2 Best is similarly fast at 5ms. These are the right choices if decompression latency is the primary concern and a lower compression ratio is acceptable.
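
A Snappy round-trip sketch, assuming github.com/golang/snappy; passing nil as the destination lets Decode size and allocate the output exactly once:

    // Sketch: Snappy round-trip via github.com/golang/snappy.
    package main

    import (
        "fmt"

        "github.com/golang/snappy"
    )

    func main() {
        compressed := snappy.Encode(nil, []byte("stand-in payload"))
        // A nil destination lets Decode allocate the output buffer once.
        restored, err := snappy.Decode(nil, compressed)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(restored))
    }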

Brotli: excellent ratio, terrible compression speed

Brotli Best (11) achieves 7.79x — the second-best ratio — but takes 10.65 seconds and 802 MB of memory to compress. Even Default (6) takes 302ms. Brotli is designed for serving pre-compressed static assets, not on-the-fly compression. Use it only when you compress once and serve many times.
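
The compress-once, serve-many pattern in sketch form, assuming the pure-Go github.com/andybalholm/brotli port and hypothetical file names:

    // Sketch: compress a static asset once, offline, with the pure-Go
    // github.com/andybalholm/brotli port.
    package main

    import (
        "os"

        "github.com/andybalholm/brotli"
    )

    func main() {
        src, err := os.ReadFile("app.js")
        if err != nil {
            panic(err)
        }
        out, err := os.Create("app.js.br")
        if err != nil {
            panic(err)
        }
        defer out.Close()

        // Level 11 (BestCompression): slow, but paid once at build time.
        w := brotli.NewWriterLevel(out, brotli.BestCompression)
        if _, err := w.Write(src); err != nil {
            panic(err)
        }
        if err := w.Close(); err != nil {
            panic(err)
        }
    }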

Gzip stdlib: the safe default

If you want zero dependencies beyond the standard library, compress/gzip at Default gives a solid 5.36x ratio in 197ms. The klauspost drop-in replacement is faster (35ms at Default) with identical output format, making it a worthwhile swap whenever gzip interoperability is required.
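
The swap really is just an import change. A sketch, assuming github.com/klauspost/compress/gzip as the drop-in:

    // Sketch: the klauspost drop-in; only the import path changes.
    package main

    import (
        "bytes"

        gzip "github.com/klauspost/compress/gzip" // was: "compress/gzip"
    )

    func main() {
        var buf bytes.Buffer
        w, err := gzip.NewWriterLevel(&buf, gzip.DefaultCompression)
        if err != nil {
            panic(err)
        }
        if _, err := w.Write([]byte("stand-in payload")); err != nil {
            panic(err)
        }
        if err := w.Close(); err != nil {
            panic(err)
        }
        // buf now holds standard gzip output that any gzip reader can decompress.
    }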


Summary Table (8.5 MB file)

Library | Level | Ratio | Compress | Decompress | Notes
XZ/LZMA2 | n/a | 10.33x | 450ms | 97ms | Best ratio, slow
Brotli | Best (11) | 7.79x | 10.65s | 36ms | Offline use only
Zstd | SpeedBestCompression | 6.67x | 357ms | 13ms | Best ratio+speed tradeoff
Zstd | SpeedFastest | 5.78x | 31ms | 13ms | Best all-around
Gzip stdlib | Default | 5.36x | 197ms | 24ms | Zero dependencies
Gzip klauspost | Default | 5.26x | 35ms | 15ms | Fast stdlib replacement
pgzip | BestSpeed | 5.16x | 5.6ms | 17ms | Fastest compress (parallel)
S2 | Better | 2.40x | 4.9ms | 7ms | Fast, low ratio
Snappy | n/a | 2.19x | 7.6ms | 4.6ms | Fastest decompress

When to use each library

  • Zstd — the default choice for new projects. Excellent ratio across all speed levels, fast decompression, low allocations, and a stable RFC-standardized format.
  • XZ/LZMA2 — when ratio is paramount, compression is offline, and you can tolerate slow compression and high memory usage.
  • Brotli — HTTP serving of static assets (CSS, JS, fonts). Pre-compress and cache; never compress on-the-fly.
  • Gzip stdlib — when you need zero non-stdlib dependencies or interoperability with existing gzip consumers. Swap to klauspost for free speed gains.
  • pgzip — when you need gzip output but compression throughput is a bottleneck; scales with core count.
  • LZ4 — when decompression throughput is the absolute priority and you’re operating at memory-bandwidth speeds. Common in storage systems and databases.
  • Snappy / S2 — lightweight, fast decompression with minimal allocations. S2 is strictly better than Snappy in pure-Go contexts.

Library Backgrounds

gzip / DEFLATE

DEFLATE is a lossless compression algorithm invented by Phil Katz in 1993 and formally specified in RFC 1951 (1996). It combines LZ77 — which replaces repeated byte sequences with back-references using a sliding window — and Huffman coding, which assigns shorter bit strings to more frequent symbols. The gzip file format was created by Jean-Loup Gailly in 1992 as a patent-free replacement for the Unix compress utility, whose LZW algorithm was encumbered by Unisys patents. DEFLATE became the backbone of a generation of internet infrastructure: it is the compression algorithm inside ZIP archives, PNG images, TLS connections, and HTTP content-encoding. The Go standard library provides compress/gzip and compress/flate; the klauspost/compress library offers drop-in replacements with assembly-optimized paths on amd64 that are substantially faster.

Zstandard (Zstd)

Zstandard was developed by Yann Collet at Facebook (now Meta) and open-sourced in August 2016. Its goal was to be a modern replacement for zlib that improves on all metrics simultaneously — compression speed, decompression speed, and ratio. Like DEFLATE it uses LZ77-style dictionary matching, but pairs it with a larger search window and a fast entropy coder based on Finite State Entropy (FSE), a variant of Asymmetric Numeral Systems (ANS). The algorithm was standardized as RFC 8478 in 2018. Adoption has been sweeping: Facebook uses it across its entire data infrastructure, the Linux kernel adopted it for module and filesystem compression, Fedora switched RPM package compression to Zstd in 2019, and Chrome and Firefox both added Content-Encoding: zstd HTTP support in 2024.

Brotli

Brotli was created at Google by Jyrki Alakuijala and Zoltán Szabadka in 2013, originally to reduce the size of WOFF2 web font transfers. Unlike Google’s earlier Zopfli (a superior DEFLATE compressor), Brotli introduced an entirely new format using a modern LZ77 variant, Huffman coding, second-order context modeling, and a large static dictionary of common words drawn from web content. It was generalized for HTTP content-encoding and standardized as RFC 7932 in 2016. Brotli is today the dominant HTTP compression algorithm: all major browsers support it, and Cloudflare, Akamai, AWS CloudFront, Nginx, and Apache all serve it. The WOFF2 font format — which depends on Brotli — received a Technology and Engineering Emmy Award in 2021. Its extreme compression ratios come at a steep compression-time cost, making it best suited for pre-compressing static assets offline rather than on-the-fly.

Snappy

Snappy (originally called “Zippy” internally) was developed at Google by Jeff Dean and Sanjay Ghemawat and open-sourced in March 2011. It was designed not for maximum ratio but for very high throughput, targeting CPU-bound scenarios inside Google’s own infrastructure — MapReduce, Bigtable, and internal RPC systems — where decompression speed is the bottleneck. The algorithm is LZ77-inspired and deliberately avoids entropy coding, accepting a lower ratio in exchange for simplicity and speed. Its wide adoption in open-source infrastructure is notable: Snappy is the default compression algorithm for MongoDB, RocksDB, LevelDB, Apache Cassandra, Hadoop, Apache Parquet, and InfluxDB.

S2

S2 is an extension and improvement of the Snappy format developed by Klaus Post as part of his klauspost/compress Go library, first introduced in August 2019. While Snappy already prioritized speed, S2 redesigns the block format and encoding strategy to simultaneously improve both ratio and throughput. S2 can decompress all valid Snappy data (backward compatible as a reader), but its own output is not readable by the original Snappy library — though it can optionally emit Snappy-compatible output at higher speed than Snappy itself. On typical machine-generated data, S2 in default mode can reduce compressed size by up to 35% compared to Snappy while improving decompression speed. On AMD64 with assembly-optimized paths, S2 stream compression exceeds 10 GB/s.

LZ4

LZ4 was developed by Yann Collet (who later also created Zstandard) and first released in April 2011. Its singular design goal is extreme speed: compression throughput routinely exceeds 500 MB/s per core and decompression can exceed 1 GB/s per core, making it one of the fastest compressors ever published. Like its LZ-family relatives it uses dictionary matching, but with a deliberately simple scheme that minimizes branch mispredictions and memory accesses. LZ4 trades ratio for that speed and is not competitive with gzip or Zstd on ratio. It was integrated into the Linux kernel in version 3.11 for SquashFS, pstore, and crypto layer compression, and ZFS on Linux, FreeBSD, and macOS supports it for transparent filesystem compression.

XZ / LZMA2

LZMA (Lempel–Ziv–Markov chain algorithm) was developed by Igor Pavlov starting in 1998 and became the compression engine powering the 7-Zip archiver’s 7z format. LZMA achieves exceptional ratios by combining a very large dictionary (up to 4 GB), a sophisticated match finder, and range encoding (an arithmetic-coding variant). LZMA2 adds multi-threaded compression by splitting data into independently compressed LZMA streams. The xz file format and XZ Utils were released in 2009 by Lasse Collin as a bzip2 successor, and XZ became the standard for distributing Linux kernel sources and software packages across Fedora, Debian, and Ubuntu — though both have since migrated to Zstandard. XZ gained unwanted notoriety in March 2024 when a supply-chain backdoor was discovered in XZ Utils 5.6.0 and 5.6.1.

pgzip

pgzip is a pure-Go parallel gzip library developed by Klaus Post (github.com/klauspost/pgzip). It is a drop-in replacement for the standard library’s compress/gzip, producing fully standard-compliant output that any gzip reader can decompress — the parallelism is transparent to consumers. Internally, pgzip splits input into independent blocks (defaulting to 1 MB each) and compresses them concurrently across available CPU cores, then stitches the resulting gzip members together. On multi-core hardware, compression throughput scales roughly linearly with core count, and pgzip also offers a Huffman-only mode reaching ~450 MB/s per core when ratio is secondary to predictable speed. It is the natural choice when gzip compatibility is required but single-threaded gzip becomes a bottleneck.
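
A sketch of the parallel writer, assuming github.com/klauspost/pgzip; the output path is hypothetical, and the block size matches the 1 MB default described above:

    // Sketch: parallel gzip with github.com/klauspost/pgzip. Any standard
    // gzip reader can decompress the result.
    package main

    import (
        "os"
        "runtime"

        "github.com/klauspost/pgzip"
    )

    func main() {
        out, err := os.Create("data.h3.gz")
        if err != nil {
            panic(err)
        }
        defer out.Close()

        w, err := pgzip.NewWriterLevel(out, pgzip.BestSpeed)
        if err != nil {
            panic(err)
        }
        // 1 MB blocks spread across all cores, per the defaults described above.
        if err := w.SetConcurrency(1<<20, runtime.GOMAXPROCS(0)); err != nil {
            panic(err)
        }
        if _, err := w.Write([]byte("stand-in payload")); err != nil {
            panic(err)
        }
        if err := w.Close(); err != nil {
            panic(err)
        }
    }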


Methodology

  • Data: sample_data/data.h3 (~8.5 MB real H3 timezone binary from localtimezone), sample_data/data_mock.h3 (~1.2 KB mock)
  • Iterations: 2 warmup + 10 measured; median values reported
  • Timing and memory measured in separate passes to avoid ReadMemStats stop-the-world pauses distorting timing (sketched after this list)
  • Environment: AMD Ryzen 9 7900X, amd64, linux, Go 1.26.1
  • All libraries are pure Go — no CGo dependencies
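
A minimal sketch of that two-pass measurement pattern (illustrative only; the actual harness is in the linked repo and differs in its details):

    // Sketch: time and memory measured in separate passes, as described above.
    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    func measure(name string, fn func()) {
        // Pass 1: wall-clock timing only.
        start := time.Now()
        fn()
        elapsed := time.Since(start)

        // Pass 2: allocation accounting. ReadMemStats is stop-the-world,
        // so it must stay out of the timed pass.
        var before, after runtime.MemStats
        runtime.GC()
        runtime.ReadMemStats(&before)
        fn()
        runtime.ReadMemStats(&after)

        fmt.Printf("%s: %v, %d allocs, %d bytes\n", name, elapsed,
            after.Mallocs-before.Mallocs, after.TotalAlloc-before.TotalAlloc)
    }

    func main() {
        measure("example", func() { _ = make([]byte, 1<<20) })
    }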


California Oak Tree Identification

February 7, 2026

Location | Leaf | Trunk & bark | Size | Other characteristics | Species
Coastal | 1‑2.5 in long, thick leathery, spiny margins; glossy green above, faint hair line in vein axils below | Trunk grows to 3 ft thick; bark smooth gray‑brown when young, darker gray with broad ridges | Up to 100 ft tall | Evergreen, dense rounded crown | Coast Live Oak (Quercus agrifolia)
Coastal | 2‑6 in long, 3‑7 deep lobes, finely haired underside | Trunk 3 ft thick (up to 5 ft), bark gray and fissured | Up to 100 ft tall | Deciduous, alligator‑like bark; acorns mature in one year | Oregon White Oak (Quercus garryana)
Inland | 5‑10 cm (2‑4 in) long, round deeply lobed; matte green top, pale green underside, soft fuzz | Bark alligator‑hide ridged; trunk up to 10 ft diameter | Up to 98 ft tall | Deciduous; acorns 2‑3 cm; masting | Valley Oak (Quercus lobata)
Inland | 4‑8 in long, 6 lobes, bristle‑tipped | Dark grey bark with small plates | 30‑80 ft tall, trunk 2 ft diameter | Acorns 2‑3 in, mature second season | Black Oak (Quercus kelloggii)
Inland | Thick leathery, 1‑3.5 in long, margins spinose or entire, fuzzy then smooth | Bark thin ~1 in, smooth, gray‑brown, may develop small tight scales | Up to 80 ft tall, 2 ft diameter | Acorns 1/2‑1.5 in long, two seasons to mature; twig slender; crown may be dense shrub or tree | Canyon Live Oak (Quercus chrysolepis)
Inland | 1.5‑2 in long, entire or sharply pointed teeth; flat shiny green above, yellow‑green below | Young bark smooth gray; older rough, irregularly furrowed with scaly ridges; short trunk, broad crown | 30‑75 ft tall, spread 30‑80 ft | Acorns 1‑1.5 in, mature two seasons; male catkins 2‑3 in | Interior Live Oak (Quercus wislizeni)
Inland | 1‑3 in long, wavy margins, bluish‑green upper, pale lower | Bark light gray, checkered | ≤60 ft tall, 2 ft diameter | Acorns 0.75‑1.5 in, single season | Blue Oak (Quercus douglasii)
Inland | 1.5‑3 in long, leathery, entire or few sharp teeth; dull blue‑gray above, greener below, somewhat fuzzy | Bark gray with narrow scaly ridges, shallow furrows | Up to 50 ft tall, short crooked trunk, large twisted limbs, sparse crown | Acorns 1 in long with thick warty cap, mature one season | Engelmann Oak (Quercus engelmannii)


Claude Code With Ollama Setup

January 31, 2026

I tried claude-code-router with Ollama and it didn't really work due to mismatched input/output formats. Even Claude Code itself doesn't work well out of the box with local (not cloud) models. Instead, at least in my experience, you have to do some extra setup to get a useful Claude Code CLI working with Ollama.

Below is a from-scratch setup that gets Claude Code running against a locally hosted gpt-oss:20b model on Ollama. It's not as strong as cloud-based SOTA models, but at least it runs locally on 32GB of memory and an Nvidia RTX 4090:

  1. Install Ollama and Claude Code

    curl -fsSL https://claude.ai/install.sh | bash
    curl -fsSL https://ollama.com/install.sh | sh
    
  2. Declare a new ollama model with expanded context:

    # create this file with the name "Modelfile"
    FROM gpt-oss:20b
    PARAMETER num_ctx 65536
    
  3. Create model

    ollama create gpt-oss-64k -f Modelfile
    
  4. Set env vars

    # Recommended: add this to ~/.bashrc
    export ANTHROPIC_AUTH_TOKEN=ollama
    export ANTHROPIC_BASE_URL=http://localhost:11434
    
  5. Run claude code

    claude --model gpt-oss-64k
    
  6. Test prompts:

    > list files in current directory
    > create a python script that calculates the first 10 fibonacci numbers called fib.py
    > Run fib.py and show its output
    


Switching From Python requirements.txt to pyproject.toml

December 6, 2025

In Python, many projects are switching from the old requirements.txt format for declaring dependencies to pyproject.toml. A single pyproject.toml can consolidate multiple requirements-*.txt files, along with other package metadata, simplifying several package-maintenance processes.

The steps to switch are:

  1. Create a bare bones pyproject.toml file in the root of your package:

    [project]
    name = "<NAME>"
    version = "<VERSION>"
    dependencies = [
        "dependency1",
        "dependency2",
    ]
    

    Copy the name and version from your setup.py if you're still using one. Copy dependencies from your requirements.txt. Dependencies in pyproject.toml follow the same format as requirements.txt and support version constraints with ==, >=, and < specifiers.
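
    For example (package names and version pins here are illustrative, not from any real project):

    dependencies = [
        "requests==2.32.0",
        "flask>=2.0",
        "numpy>=1.24,<3",
    ]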

  2. If you’ve split your requirements-test.txt or other types of dependencies, you can include them in your pyproject.toml as optional dependencies:

    [project]
    ...
    
    [project.optional-dependencies]
    test = [
        "dependency3",
        "dependency4",
    ]
    
  3. Delete your requirements.txt. Hopefully you’re using version control.

  4. Switch your build commands from pip install -r requirements.txt to pip install -e . (an editable install of the current directory). To also install the optional test dependencies, use pip install -e .[test].


Generating Cloudflare Origin Certificate for Multiple Domains

November 8, 2025

Cloudflare recommends having an Origin Certificate installed on the server that hosts your website (your Origin) so that requests between Cloudflare and your Origin are encrypted and Cloudflare can authenticate your server’s data.

This gets problematic if you host multiple domains on your server. Issuing one certificate that covers several domains isn't possible through the UI, but Cloudflare does support multi-domain Origin Certificates through its API. To generate and install a multi-domain certificate, use this script:

"""
This script generates an origin certificate from Cloudflare using their API.
It requires the 'requests' library to make HTTP requests.
"""

import os

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
import requests


# Origin SSL Certificate Update API Token
# Generate at https://dash.cloudflare.com/profile/api-tokens
# Create a Token -> Create Custom Token
# Settings: Permissions Zone + SSL and Certificates + Edit
AUTH_TOKEN = "<TOKEN>"

INSTRUCTIONS = """
sudo cp certificates/server.key /etc/nginx/ssl/server.key
sudo cp certificates/server.pem /etc/nginx/ssl/server.pem
sudo chmod 640 /etc/nginx/ssl/server.key
# Restart nginx
# /etc/init.d/nginx restart
"""


def get_domains() -> list[str]:
    return [
        "example.com",
        "example2.com",
        "example3.com",
    ]


def generate_key() -> rsa.RSAPrivateKey:
    # Generate our key
    key = rsa.generate_private_key(
        public_exponent=65537,
        key_size=2048,
    )
    # Write our key to disk for safe keeping
    with open("certificates/server.key", "wb") as f:
        private_bytes = key.private_bytes(
            encoding=serialization.Encoding.PEM,
            format=serialization.PrivateFormat.TraditionalOpenSSL,
            encryption_algorithm=serialization.NoEncryption(),
        )
        print("Private Key for signing")
        print(private_bytes.decode("utf-8"))
        f.write(private_bytes)
    return key


def generate_csr(key: rsa.RSAPrivateKey) -> str:
    # Generate a CSR
    csr = x509.CertificateSigningRequestBuilder().subject_name(x509.Name([
        # Provide various details about who we are.
        x509.NameAttribute(NameOID.COUNTRY_NAME, "<COUNTRY>"),
        x509.NameAttribute(NameOID.STATE_OR_PROVINCE_NAME, "<STATE>"),
        x509.NameAttribute(NameOID.LOCALITY_NAME, "<CITY>"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "<ORGANIZATION>"),
        x509.NameAttribute(NameOID.COMMON_NAME, "<COMMON_NAME>"),
    ])).sign(key, hashes.SHA256())
    csr_bytes = csr.public_bytes(serialization.Encoding.PEM)
    with open("certificates/csr.pem", "wb") as f:
        f.write(csr_bytes)
    csr_string = csr_bytes.decode("utf-8")
    print("Certificate Signing Request:")
    print(csr_string)
    return csr_string


def request_origin_certificate(csr: str, domains: list[str]) -> None:
    # Request the origin certificate from Cloudflare
    url = "https://api.cloudflare.com/client/v4/certificates"
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer %s" % AUTH_TOKEN,
    }
    domains += ["*.%s" % d for d in domains]
    print("Domains:")
    print(domains)
    data = {
        "csr": csr,
        "hostnames": domains,
        "request_type": "origin-rsa",
        "requested_validity": 5475,  # Valid for 15 years
    }
    response = requests.post(url, json=data, headers=headers)
    if response.status_code == 200:
        print("Origin certificate requested successfully.")
        data = response.json()
        certificate_pem = data["result"]["certificate"]  # type: ignore
        with open("certificates/server.pem", "w") as f:
            f.write(certificate_pem.strip())
        print("Origin certificate:")
        print(certificate_pem)
    else:
        print("Failed to request origin certificate.")
        print("Status Code:", response.status_code)
        print(response.text)


def main() -> None:
    # Create the output directory before any key/CSR/cert files are written
    os.makedirs("certificates", exist_ok=True)
    domains = get_domains()
    key = generate_key()
    csr_string = generate_csr(key)
    request_origin_certificate(csr_string, domains)
    print(INSTRUCTIONS)


if __name__ == "__main__":
    main()


Introductory Computer Science and Software Engineering Topics

August 3, 2025

PID Controller

June 29, 2025

Availability Percentages

February 8, 2025

Javascript/Typescript Decorators Suck

January 3, 2025

Upgrading MariaDB Database Versions

June 2, 2024

Concurrent Python Example

January 1, 2024

Updating UUIDField on MariaDB to Django 5

December 27, 2023

Replacing Setup.py

December 7, 2023

Fixing Mariadb --Column-Statistics Errors

June 5, 2023

Geographic Geometry Simplification

February 2, 2023

Linters

January 2, 2023

Installing Mysqlclient in Python Slim Docker Image

December 29, 2022

Processor Trends

December 19, 2022

Resizing a Ubuntu Disk in a UTM VM

October 19, 2022

Bash File Test Operators

October 5, 2022

Python Generic Type Annotations

May 28, 2022

Mac Menubar Applications

February 17, 2022

Logodust

February 13, 2022

Python Releases

January 8, 2022

Debian Releases

January 7, 2022

Ubuntu Releases and Support Periods

December 18, 2021

Monitoring System CLIs (Top for X)

December 18, 2021

Fixing "EFI stub: Exiting boot services and installing virtual address map..."

December 11, 2021

ARM Support

November 29, 2021

Map Caps Lock to Escape for Vim

November 24, 2021

Download and Convert Youtube Playlists to MP3 Files

July 15, 2021

Nobody Ever Got Fired for Copying FAANG

June 27, 2021

Removing Token Authentication From Jupyter/iPython Notebooks

May 31, 2021

Debian and Ubuntu Releases

February 13, 2021

Setting Up FastAI Fastbook on a Fresh Ubuntu Instance

January 31, 2021

Tip for Developer Tools Startups

January 30, 2021

A Better Go Defer

October 20, 2020

Covid-19 Economy Predictions

October 13, 2020

Basic Docker Monitoring

July 4, 2020

Switching From Go Dep to Go Mod

May 30, 2020

Upgrading LibMySQLClient in Python MySQLDB/MySQLClient

May 25, 2020

Developing Django in Production

May 15, 2020

Quote

March 5, 2020

Sendmail Wrapper for Mailgun

March 1, 2020

Python Release Support Timeline

December 26, 2019

Use the Default Flake8 Ignores

December 14, 2019

Making Pip Require a Virtualenv

December 5, 2019

Engineering Toolbox

November 30, 2019

Node Timezones

November 1, 2019

Sampling Samples

August 21, 2019

Rotating a NxN Matrix in One Line of Python

July 27, 2019

iTerm2 Search History

July 19, 2019

Nginx Auth With IP Whitelists

June 29, 2019

Bash Strict Mode

May 11, 2019

Optimizing Asus Routers for Serving Websites With Cloudflare

May 5, 2019

Browserify, Mochify, Nyc, Envify, and Dotenv

April 1, 2019

Scraping Images From Tumblr

February 24, 2019

There Are Too Many NPM Packages

February 10, 2019

Programmers Writing Legal Documents

January 31, 2019

Solidity Review

November 17, 2018

Likwid

November 9, 2018

My First Server's IP

November 9, 2018

Installing Netdata

September 23, 2018

Interrobang Versus Shebang

July 10, 2018

Bad Interview Questions

July 8, 2018

Showing Users in Different Databases

July 7, 2018

Some MIT (Undergraduate) Admissions Interview Advice

July 4, 2018

Optimize the Develop-Test-Debug Cycle

April 22, 2018

Example of Python Subprocess

March 23, 2018

Spotted in Taiwan

January 20, 2018

Fixing "Fatal Error: Python.h: No Such File or Directory"

December 16, 2017

Cassandra Primary Keys

December 11, 2017

MyPy Review

November 2, 2017

Griping About Time Zones

October 26, 2017

Bundling Python Packages With PyInstaller and Requests

September 23, 2017

Go Receiver Pointers vs. Values

September 4, 2017

Fixing statsonice.com Latency

September 1, 2017

Showing Schemas in Different Databases

August 26, 2017

Straight Lines

June 2, 2017

Emerson on Intellect

May 29, 2017

Core Metric for Developer Productivity

May 21, 2017

How to Capture a Camera Image With Python

May 7, 2017

Python Has a Ridiculous Number of Inotify Implementations

May 2, 2017

Projects: Gentle-Alerts

April 27, 2017

Creating a New PyPI Release

April 24, 2017

Eva Air USB Ports

April 24, 2017

Projects: Git-Browse

March 18, 2017

Cassandra Compaction Strategies

March 5, 2017

Code Is Like Tissue Paper

January 25, 2017

Seen in a Bathroom Stall at MIT

January 24, 2017

Underused Python Package: Webbrowser

January 21, 2017

Pax ?

January 5, 2017

Golang Review

January 2, 2017

Wadler's Law

December 15, 2016

Tunnel V2

December 8, 2016

MultiPens

December 5, 2016

SSH Tunnel

September 18, 2016

That Time I Was a Whitehat Hacker

September 18, 2016

Comparison of Country and Company GDPs

September 8, 2016

Sketching Science

September 8, 2016

Tech Hiring Misperceptions at Different Companies

July 22, 2016

Calculating Rails Database Connections

June 26, 2016

DevOps Reactions

June 12, 2016

Tuning Postgres

June 9, 2016

Fibonaccoli

June 4, 2016