US Remote Solidity Take-Home Assignment: Gas Optimization vs Clean Code — Do Interviewers Actually Grade SSTORE/SLOAD and Foundry Tests?

ChainMentorNaina
@ChainMentorNaina
Updated: Feb 26, 2026

I’m interviewing for US remote Solidity roles and keep getting a Solidity take-home assignment instead of a live coding round. The part I’m stuck on isn’t “can I finish it” — it’s what they actually grade.

Typical prompt: implement a small contract (ERC20-ish / vault-ish), handle a couple of edge cases, and ship it with Foundry tests. I can make it “clean” (NatSpec, readable modifiers, clear revert reasons, consistent naming, good test structure). But I can also go down the rabbit hole of gas-optimization-interview-style changes: caching, tight storage packing, minimizing external calls, and micro-optimizing SSTORE/SLOAD patterns.

Here’s the dilemma: in a take-home, do US interviewers reward “smart” gas wins, or do they penalize anything that hurts readability/maintainability? Like if I use assembly for one hot path, or I refactor to reduce storage writes but the code becomes harder to review.

If you’ve been on the hiring side: what are the usual grading buckets?

  • correctness + security basics

  • clarity + architecture

  • test quality (Foundry tests)

  • gas profiling / measurable improvements

  • “production thinking” (docs, tradeoffs, assumptions)

Also: is it worth including a short write-up (what I optimized, what I didn’t, why), or does that feel like overkill?

Replies


  • Damon Whitney

    @CareerSensei Feb 23, 2026

    I’ve reviewed a bunch of these Solidity take-home assignments on the hiring side. Most candidates over-index on “gas tricks” because they think it’s a gas optimization interview. In reality, the first filter is boring: correctness + clarity + test discipline.

    If your solution is clean, readable, and your Foundry tests are tight, you’re already ahead. When I see someone name-drop SSTORE/SLOAD but the tests miss obvious edge cases (re-entrancy surfaces, approvals, rounding, access control), that’s a red flag. If you do optimize, stick to “explainable optimizations”: caching a storage read, avoiding repeated writes, packing storage where it’s natural, avoiding unnecessary state changes. No heroics.
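    To make “caching a storage read” concrete, here’s a minimal sketch (contract and variable names are hypothetical, not from any specific assignment): loading a storage variable into a local once avoids one SLOAD per loop iteration, and a reviewer can still verify it at a glance.

    ```solidity
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.20;

    contract RewardPool {
        uint256 public rewardRate; // lives in a storage slot
        mapping(address => uint256) public rewards;

        // Naive version: one SLOAD of rewardRate on every iteration.
        function accrueNaive(address[] calldata users) external {
            for (uint256 i = 0; i < users.length; i++) {
                rewards[users[i]] += rewardRate; // SLOAD each time
            }
        }

        // "Explainable" optimization: cache the rate in a local.
        // Same behavior, one SLOAD total, still trivial to review.
        function accrue(address[] calldata users) external {
            uint256 rate = rewardRate; // single SLOAD
            for (uint256 i = 0; i < users.length; i++) {
                rewards[users[i]] += rate;
            }
        }
    }
    ```

    This is the level of optimization that tends to read as judgment rather than cleverness: the diff is small, the intent is obvious, and the tests don’t need to change.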

    What instantly upgrades a take-home for me:

    • a short README with tradeoffs (“I chose X over Y because…”)

    • basic security reasoning (threat model in 5 lines)

    • one or two stronger tests: fuzzing a boundary, or an invariant like “totalSupply always equals sum of balances” (if relevant)

    If you’re unsure: write the clean version first, then add a small “optional optimization” commit with a note. That shows judgment without making the reviewer decode cleverness.
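    For the “one or two stronger tests” point, here’s a Foundry-style sketch of a boundary fuzz test plus a supply-conservation check. `MyToken` and its `mint()` are hypothetical stand-ins for whatever the assignment asks you to build:

    ```solidity
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.20;

    import {Test} from "forge-std/Test.sol";
    import {MyToken} from "../src/MyToken.sol"; // hypothetical contract under test

    contract MyTokenTest is Test {
        MyToken token;
        address alice = address(0xA11CE);
        address bob = address(0xB0B);

        function setUp() public {
            token = new MyToken();
            token.mint(alice, 1_000e18); // assumes the assignment exposes mint()
        }

        // Fuzz a boundary: transfers up to the full balance succeed,
        // anything above it must revert.
        function testFuzz_TransferBoundary(uint256 amount) public {
            uint256 balance = token.balanceOf(alice);
            vm.prank(alice);
            if (amount <= balance) {
                token.transfer(bob, amount);
                assertEq(token.balanceOf(bob), amount);
            } else {
                vm.expectRevert();
                token.transfer(bob, amount);
            }
        }

        // Conservation: a transfer moves value between accounts but never
        // changes totalSupply — a lightweight version of the
        // "totalSupply equals sum of balances" invariant.
        function testFuzz_SupplyConserved(uint256 amount) public {
            amount = bound(amount, 0, token.balanceOf(alice));
            uint256 supplyBefore = token.totalSupply();
            vm.prank(alice);
            token.transfer(bob, amount);
            assertEq(token.totalSupply(), supplyBefore);
            assertEq(token.balanceOf(alice) + token.balanceOf(bob), supplyBefore);
        }
    }
    ```

    Two tests like these signal reasoning far more than dozens of happy-path asserts.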

  • Web3WandererAva

    @Web3Wanderer Feb 23, 2026

    From the hiring side: we’re not running a gas optimization interview inside a take-home, even if the assignment mentions gas.

    What we usually grade (roughly in this order):

    • correctness and safety assumptions

    • structure and readability (could another dev maintain this?)

    • test intent, not test volume (Foundry tests that show reasoning > 100% coverage)

    • awareness of gas, not obsession with it

    If someone uses assembly or heavy caching without explaining tradeoffs, it hurts more than it helps. We’d rather see a comment like “left this unoptimized for readability; would optimize if this path is hot.”

    Gas optimizations are conversation fuel for the follow-up call, not the core scoring metric in the take-home itself.

  • Victor P

    @TrG6JIR Feb 25, 2026

    I’ve been on both sides of this, most recently as a candidate, and my early mistake was treating the Solidity take-home assignment like a gas optimization interview.

    I overthought SSTORE/SLOAD, cached aggressively, and tried to be “clever.” The code was cheaper, but reviewers had to stop and reason about it. One comment I got was basically: “This works, but it’s harder to trust.” That stung — and it was fair.

    Next time around, I focused on clean flows and solid Foundry tests (state transitions, revert cases, one fuzz test). I left most gas optimizations out of the code and just mentioned them briefly in a README — where I would reduce storage writes and why I didn’t touch them yet.

    That actually led to a better follow-up discussion. They weren’t grading how low I got gas; they were grading whether I understood when optimization is justified.

    Big takeaway for me: show you understand gas, but don’t optimize before you’ve earned trust through correctness and clarity.