• Layer-2 Solidity interview: what’s a sane event-logging + monitoring setup without bloating L1 data fees?

    BlockchainMentorYagiz

    @BlockchainMentor
    Updated: Jan 17, 2026
    Views: 1.9K

    Yesterday I got asked something in an interview that sounded simple, but I realized I only had a “blog answer” for it.

    If you’re shipping Solidity on Layer-2s (Optimism / Arbitrum / zkSync), how do you design efficient logging and monitoring so you can debug incidents and prove what happened later — without spamming events and quietly increasing user costs?

    I know events are cheaper than storage, but the interviewer pushed on tradeoffs:

    • What do you log on-chain vs what do you rely on off-chain indexing (The Graph / subgraphs / custom indexers)?

    • Do you treat events like a stable “API” (small schema, long-lived), or do teams log more aggressively on critical paths (withdrawals, upgrades, liquidations)?

    • How do you handle the case where users say “my tx failed”? If it reverted, you don’t even get logs to read.

    If you’ve built production contracts, I’d love a real answer: what’s your default event schema mindset, what do you index, what do you alert on, and what would you avoid emitting even on L2?

    4 Replies
Replies
  • SmartChainSmith

    @SmartChainSmith · 1yr

    On L2, I’d still think of logs as a cost center, because a lot of the “fee pain” comes from data posted to L1 (Optimism literally splits out an L1 Data Fee component).
    So my pattern is: events are an interface, not a diary.

    I emit a few stable events that represent state transitions (Deposit/Withdraw/Upgrade/RoleChange). I keep topics minimal because each LOG costs a base amount plus per-topic and per-byte charges (roughly 375 gas base, plus 375 per topic and 8 per byte of data).
    Anything “debuggy” (intermediate values) goes to traces in staging, or to off-chain analytics.
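
    Roughly the shape I mean, as a sketch (Vault/Deposit here are just illustrative names, and the gas figures are only the raw LOG execution costs):

        // SPDX-License-Identifier: MIT
        pragma solidity ^0.8.20;

        // Sketch: a small, stable event schema with one event per state transition.
        // LOG gas: 375 base + 375 per topic + 8 per byte of data.
        contract Vault {
            // Non-anonymous events spend topic0 on the signature, so Deposit has
            // 2 topics (signature + user) and 32 bytes of data:
            // 375 + 2*375 + 32*8 = 1,381 gas of L2 execution.
            event Deposit(address indexed user, uint256 amount);
            event Withdraw(address indexed user, uint256 amount);

            mapping(address => uint256) public balances;

            function deposit() external payable {
                balances[msg.sender] += msg.value;
                emit Deposit(msg.sender, msg.value);
                // No "debug" events for intermediate values: that's what traces
                // in staging and off-chain analytics are for.
            }
        }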

    If you want an interview-safe way to explain this clearly, AOB’s interview hub + debugging framework help you articulate boundaries without sounding rehearsed. 

  • BlockchainMentorYagiz

    @BlockchainMentor · 1yr

    The mistake I see is people mixing up observability with audit trail.

    For security reviews, I care about what you can prove later: critical actions, parameter changes, upgrade steps, admin operations, and “who triggered what.” Events are great for that because they’re cheaper than storage and meant for off-chain consumption — but don’t log secrets, don’t log anything that increases your attack surface, and don’t create 12 variants of the same event.

    Also: events won’t save you when a tx reverts. Reverts discard logs, so your monitoring plan must include revert classification (custom errors), plus tracing on failures.
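
    A minimal sketch of that split, with made-up names: one stable event for the audit trail (including who triggered it), and custom errors for the failure classes, since the error data is what you decode when the tx reverts and the logs are gone:

        // SPDX-License-Identifier: MIT
        pragma solidity ^0.8.20;

        // Sketch: admin path with an audit-trail event + typed revert classification.
        contract FeeConfig {
            // Audit trail: who changed what, old value vs new value.
            event FeeUpdated(address indexed admin, uint256 oldFeeBps, uint256 newFeeBps);

            // Failure classes: cheap to revert with, and they carry enough
            // context to be decoded off-chain.
            error NotAdmin(address caller);
            error FeeTooHigh(uint256 requested, uint256 maxBps);

            uint256 public constant MAX_FEE_BPS = 1_000;
            address public admin;
            uint256 public feeBps;

            constructor() {
                admin = msg.sender;
            }

            function setFee(uint256 newFeeBps) external {
                if (msg.sender != admin) revert NotAdmin(msg.sender);
                if (newFeeBps > MAX_FEE_BPS) revert FeeTooHigh(newFeeBps, MAX_FEE_BPS);

                uint256 oldFeeBps = feeBps;
                feeBps = newFeeBps;
                emit FeeUpdated(msg.sender, oldFeeBps, newFeeBps);
            }
        }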

    If you want this framed in “how secure teams think,” the Smart Contract Security & Audits hub + QA/testing hub are worth skimming. 

  • SmartChainSmith

    @SmartChainSmith · 1yr

    From the indexing side: the biggest gift you can give your future self is consistent event semantics.

    Teams get burned when they emit a lot, but the schema is inconsistent (same concept logged under 3 event names, different field ordering, random indexed params). The Graph/subgraphs (or a custom indexer) work best when events are “business facts”: PositionOpened, Liquidation, FeeCollected, ConfigChanged. Then you can build dashboards + alerts without needing to interpret traces every time.

    For failed tx reports: don’t promise logs. Reverts wipe logs, so you’ll rely on receipt status + error decoding + traces when needed.
    So in interviews I say: “events for facts, errors for failure classes, traces for deep forensics.”
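
    Something like this, as a sketch (the fields are hypothetical, the naming and ordering discipline is the point):

        // SPDX-License-Identifier: MIT
        pragma solidity ^0.8.20;

        // Sketch: one event surface a subgraph can map 1:1 to entities.
        interface ILendingEvents {
            // Same ordering convention everywhere: actor, then identifiers, then amounts.
            event PositionOpened(address indexed account, uint256 indexed positionId, uint256 collateral, uint256 debt);
            event Liquidation(address indexed liquidator, uint256 indexed positionId, uint256 repaid, uint256 seized);
            event FeeCollected(address indexed payer, uint256 amount);
            event ConfigChanged(address indexed admin, bytes32 indexed key, uint256 oldValue, uint256 newValue);
        }

    One name per concept, and the indexed params are only the fields dashboards actually filter on; everything else stays as cheap data bytes.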

    If you’re practicing this as an interview skill, AOB’s debugging track is directly relevant. 

  • SmartChainSmith

    @SmartChainSmith · 1yr

    Please feel free to ask if you need more help. Happy coding and endless blocks of success ahead!

  • Santos P

    @Santos · 7mos

    All of this is super valuable advice.

  • SmartContractGuru

    @SmartContractGuru · 2mos

    If I’m mentoring someone on this question, I tell them: don’t try to “log your way out of uncertainty.”

    In real contracts, we log for two reasons: (1) other people/systems need to react later, (2) you need an audit trail for the few things that matter. So I keep events boring: money moved, position changed, admin/config changed, upgrade/role changes. That’s it.

    Where people overdo it is logging “debug values.” It feels safe, but it’s usually noise + cost. And on L2 you still pay for data one way or another, so it’s not free.

    Also, the interview trap: when a tx reverts, you don’t get your nice events. So your “monitoring story” can’t be only events — it’s errors + traces + indexers.
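
    To make the cutoff concrete, a rough sketch (the names are mine, not a standard): on a hot path I aim for exactly one emit for the fact, typed errors for the failure branches, and nothing emitted just for debugging.

        // SPDX-License-Identifier: MIT
        pragma solidity ^0.8.20;

        // Sketch: hot path with one log on success, typed errors on failure, no debug events.
        contract Pool {
            event Withdrawn(address indexed account, uint256 amount); // the fact

            error InsufficientBalance(uint256 requested, uint256 available); // failure class
            error TransferFailed(address to);                                // failure class

            mapping(address => uint256) public balances;

            receive() external payable {
                balances[msg.sender] += msg.value;
            }

            function withdraw(uint256 amount) external {
                uint256 bal = balances[msg.sender];
                if (amount > bal) revert InsufficientBalance(amount, bal);

                balances[msg.sender] = bal - amount; // effects before interaction
                (bool ok, ) = msg.sender.call{value: amount}("");
                if (!ok) revert TransferFailed(msg.sender);

                emit Withdrawn(msg.sender, amount); // one event, no Debug/Checkpoint spam
            }
        }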

    What’s your cutoff? Like: how many events is “too many” on a hot path?

  • Shubhada Pande

    @ShubhadaJP · 2mos

    This is such a valuable skill set. Even non-tech founders rely on proper logs to track on-chain activity during audits. Good logging gives confidence to everyone—from developers to investors. For deeper insights, see this thread on smart contract audit tools → https://artofblockchain.club/discussion/can-smart-contracts-be-audited-what-are-the-common-tools-for-auditing

    Also worth checking debugging basics for Solidity developers → https://artofblockchain.club/discussion/debugging-smart-contracts-is-tough-how-do-you-make-it-easier

    and the Hardhat logging mistakes post → https://artofblockchain.club/discussion/need-help-hardhat-debugging-mistakes-juniors-repeat-logs-vs-state-assumptions

  • BlockchainMentorYagiz

    @BlockchainMentor · 1w

    I think I get the “events are for facts, not debug spam” point now.

    But one thing is still confusing me: what do teams do for monitoring when the tx fails? Because if a user says “withdraw failed,” and the tx reverted, there’s no event trail.

    Do people rely mostly on:

    • decoding custom errors + tx receipt status, and then

    • pulling traces only when it’s serious,

    or do they build a separate “failure indexer” pipeline?

    Basically: what’s the practical workflow when you’re on-call and you need to explain why something failed?
