Technology Selection: 8 Axes and a 6-Question Checklist from 27 Official Sources
TL;DR
Technology selection is more reproducible when you first narrow candidates by workload / failure mode / existing assets / migration shape, then rank by iteration speed / performance economics / operational standardization / strategic leverage — rather than directly comparing “which is faster” or “which is trending.” This article presents an 8-axis model and a 6-question checklist distilled from 27 official source-backed cases across big tech and startups.
Why I Wrote This
Technology selection discussions tend to collapse into single-axis comparisons: “Rust is fast,” “TypeScript has a huge ecosystem.” But when you actually read official engineering blogs from big tech and startups, the real decision factors are far more compound.
For example, Discord’s move from Go to Rust wasn’t simply “Rust is faster” — it was that GC tail latency spikes didn’t fit their real-time communication workload. Stripe’s migration to TypeScript wasn’t “types are nice” — refactor safety and migration tooling at scale were the real drivers.
After collecting 27 such cases and analyzing them cross-sectionally, I found that technology selection factors normalize into 8 axes, and a 6-question checklist run in order produces reproducible decisions.
The 8-Axis Model
Decision factors extracted from 27 cases, deduplicated and normalized into 8 axes.
```mermaid
graph LR
    subgraph "Narrow candidates (upstream)"
        A["1. Workload fit"]
        B["2. Safety & reliability"]
        C["3. Ecosystem & asset reuse"]
        D["4. Migration shape"]
    end
    subgraph "Rank candidates (downstream)"
        E["5. Iteration speed"]
        F["6. Performance economics"]
        G["7. Operational standardization"]
        H["8. Strategic leverage"]
    end
```
Upstream axes (1–4): strongly narrow the candidate set
1. Workload fit — Does the technology match the shape of the workload? High-QPS + low-latency, long-running orchestration, and CPU-bound batch need fundamentally different tools. Discord moved from Go to Rust because GC tail latency didn’t fit their real-time communication workload.
2. Safety and reliability model — What’s the most painful failure mode? Cloudflare built Pingora in Rust because memory safety directly affects infrastructure reliability. Dropbox chose Rust to minimize data corruption risk in their sync engine.
3. Ecosystem and existing asset reuse — How much of the existing codebase can’t be thrown away? Faire adopted Kotlin for seamless Java interop. No matter how good a new language is, if it can’t leverage existing libraries, toolchains, and team knowledge, adoption cost wins.
4. Migration shape — Big bang, coexistence, or config-change level? Meta’s Java→Kotlin migration at scale found that codemod / translation pipelines mattered more than the language choice itself. Vercel’s Turborepo Go→Rust migration chose incremental porting over full rewrite to keep shipping features.
Downstream axes (5–8): final ranking
5. Iteration and workflow speed — Not just execution speed, but the entire dev loop: editor startup, typecheck latency, hot reload. The TypeScript native port was motivated by editor feedback speed, not runtime performance. Shopify’s Project References adoption was driven by incremental typecheck improvements.
6. Performance economics — Absolute performance weighed against infrastructure cost. Figma moved to WebAssembly for a 3x load time improvement that directly impacted UX. LinkedIn adopted Protobuf for serialization efficiency that reduced infra costs.
7. Operational standardization — How to evaluate mixed-toolchain cost. Meta’s Java→Kotlin migration showed that maintaining a mixed codebase incurs parallel toolchains and build speed penalties, making language adoption an operational standardization problem. Airbnb’s Bazel migration prioritized hermeticity, repeatability, and a uniform build layer over raw speed.
8. Strategic leverage — Does this choice compound into future standardization or platform strategy? Swift’s IDE support expansion to Open VSX (reaching Cursor, VSCodium, Kiro) shows editor compatibility itself can be strategic leverage. MoonBit’s Wasm-first design targets cloud-edge and AI-native workflows as a forward-looking platform bet.
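The filter-then-rank structure above can be sketched as a small scoring model. This is an illustrative Python sketch under my own assumptions — the snake_case axis names, the pass/fail gating on upstream axes, and the 0–5 downstream ratings are not from the cases, just one way to encode the model:

```python
from dataclasses import dataclass

# Upstream axes gate candidates; downstream axes rank the survivors.
UPSTREAM = ["workload_fit", "safety_reliability", "ecosystem_reuse", "migration_shape"]
DOWNSTREAM = ["iteration_speed", "performance_economics",
              "operational_standardization", "strategic_leverage"]

@dataclass
class Candidate:
    name: str
    clears: dict   # upstream axis -> bool: does it clear the bar?
    scores: dict   # downstream axis -> 0..5 rating

def select(candidates):
    # 1. Narrow: drop any candidate that fails an upstream axis.
    survivors = [c for c in candidates if all(c.clears.get(a, False) for a in UPSTREAM)]
    # 2. Rank: order survivors by total downstream score.
    return sorted(survivors,
                  key=lambda c: sum(c.scores.get(a, 0) for a in DOWNSTREAM),
                  reverse=True)
```

The point of the shape, not the numbers: a candidate that scores a perfect 5 on every downstream axis still never gets ranked if it fails a single upstream gate — which is exactly why "which is faster" comparisons alone mislead.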
Big Tech vs Startups: Which Axes Weigh More
The weight of each axis shifts significantly with organizational context. Cross-referencing 27 cases reveals stable patterns.
```mermaid
graph LR
    subgraph "Big tech emphasis"
        BT1["Migration shape"]
        BT2["Operational standardization"]
        BT3["Existing asset reuse"]
    end
    subgraph "Shared emphasis"
        CM1["Safety & reliability"]
        CM2["Performance economics"]
    end
    subgraph "Startup emphasis"
        ST1["Workload fit"]
        ST2["Iteration speed"]
        ST3["Strategic leverage"]
    end
```
- Big tech cares heavily about mixed-codebase cost, shared toolchains, and standardization penalties. Adopting a new technology is less about “is it good” and more about “how does it coexist with millions of lines of existing code”
- Startups care more about workload fit, shipping speed, and leverage. With fewer existing assets, the workload-fit vs. dev-velocity tradeoff is more direct
- Both treat safety/reliability and performance economics as strong axes regardless of scale — but big tech weighs blast radius, compatibility, and org-scale reliability more, while startups weigh latency, infra cost, and shipping confidence more
6-Question Checklist
The 8 axes compressed into 6 questions for practical use. Each question’s primary axis is noted.
- What is the workload? → Workload fit
- What’s the most painful failure mode? → Safety and reliability model
- How much of existing assets can’t be discarded? → Ecosystem and existing asset reuse
- Is migration big-bang, coexistence, or config-change level? Can it be supported by automated translation / codemod / build-fix? → Migration shape
- What are you optimizing for? Execution performance / editor-build feedback / dev velocity / operational reproducibility / ramp-up? → Iteration speed / Performance economics / Operational standardization
- Does this choice compound into future standardization or platform strategy? → Strategic leverage
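The "run in order" discipline can be made concrete by encoding the checklist as ordered (question, axes) pairs and pruning candidates after each question. A sketch under assumptions: the `clears_bar` predicate is a stand-in for the human judgment each question actually requires:

```python
# Each question maps to the axis (or axes) it primarily probes, in run order.
CHECKLIST = [
    ("What is the workload?", ["workload_fit"]),
    ("What's the most painful failure mode?", ["safety_reliability"]),
    ("How much of existing assets can't be discarded?", ["ecosystem_reuse"]),
    ("Big-bang, coexistence, or config-change migration?", ["migration_shape"]),
    ("What are you optimizing for?", ["iteration_speed", "performance_economics",
                                      "operational_standardization"]),
    ("Does this compound into future strategy?", ["strategic_leverage"]),
]

def run_checklist(candidates, clears_bar):
    """Prune candidates question by question, in checklist order.

    clears_bar(candidate, axis) -> bool is a placeholder for the human
    judgment behind each question.
    """
    remaining = list(candidates)
    for _question, axes in CHECKLIST:
        remaining = [c for c in remaining if all(clears_bar(c, a) for a in axes)]
    return remaining
```

Running the questions as sequential prunes, rather than scoring everything at once, is what keeps early single-axis arguments ("it's faster") from dominating the decision.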
Applied Examples: 3 Workload Types
A. High-QPS Backend / Infra
- Ask Workload fit and failure mode first — Does the GC model / memory safety / concurrency model match the workload?
- Then Migration shape and existing asset reuse — How to safely navigate the coexistence period?
- Rank by Performance economics and Operational standardization
Representative cases: Discord (Go→Rust), Cloudflare (Pingora), LinkedIn (Protobuf)
B. Large Product UI / Application
- Ask Iteration speed first — editor/build feedback loop, refactor safety, hot reload matter more than runtime speed
- Weight Migration shape and interop heavily — coexistence cost with existing codebase
- Strategic leverage extends to design systems, shared tooling, mobile-web alignment
Representative cases: Stripe (TypeScript), Shopify (React Native, Project References), Figma (WebAssembly)
C. AI / Data / ML Platform
- View Workload fit as system workload, not model name — batch vs. real-time inference vs. streaming
- Ask Operational standardization and Safety / reliability early — training-serving consistency and release decoupling
- Strategic leverage extends to training-serving consistency, team reuse, deployment topology
Representative cases: Uber (Michelangelo), NVIDIA (Triton), Modal (Physical Intelligence)
Findings Specific to Tooling / Language Migration
The tooling and language migration cases surfaced cross-cutting patterns that the 8-axis model alone doesn't capture:
- Runtime benchmarks alone mislead — The TypeScript native port was motivated by editor startup / typecheck latency / memory usage, not runtime performance. Airbnb’s Bazel migration prioritized hermeticity and repeatability over speed
- Migration shape includes “toolchain replatforming” — Not just rewriting application code, but migrating build systems, editors, and CI pipelines is an independent selection dimension
- Mixed-codebase cost compounds over time — Meta’s Java/Kotlin coexistence accumulated parallel toolchains, build speed penalties, and onboarding complexity. How you design the “coexistence period” significantly affects final ROI
- Zig-family choices are articulated as constraint bundles rather than Rust comparisons — Single binary, deterministic memory, low-level control, native integration, and fast startup form a bundle that fits specific workloads
Conclusion
Improving reproducibility in technology selection isn’t about deciding “which technology is the strongest.” It’s about having an order of questions: first narrow by workload, failure mode, existing assets, and migration shape, then rank by dev velocity, performance economics, operational standardization, and strategic leverage.
How to use the 6-question checklist:
- Run through the 6 questions in order to narrow candidates
- For questions where you hesitate, go back to the case set for representative evidence
- Finally, check against all 8 axes to catch any missing comparison dimensions
This ordering keeps decisions focused even as the evidence set grows.
That’s all.