Interesting conversation going on, so I'm jumping in.
@Ayush What you’re describing is exactly why access control isn’t a checklist — it’s a graph problem. And most teams never map the graph after v1.
The worst ACL flaw I’ve seen wasn’t a missing modifier or a forgotten role — it was an implicit trust path created months after launch.
A protocol had a clean RBAC setup:
Governor → high-impact params
Ops → operational knobs
Keeper → routine triggers
Everything looked airtight. But during an incident review, we discovered that a “gas-optimized” refactor had introduced an internal function through which the Keeper role could indirectly reach a parameter-setting function via a batching utility. Individually, each piece was harmless.
Combined, it effectively gave the Keeper role partial governance authority. No one caught it because unit tests mocked roles in isolation—not the composed flow.
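Roughly the shape it took (a minimal sketch, not the real protocol: the role names, the liquidationBonusBps parameter, and the batch action ids are all invented, and I'm assuming OpenZeppelin's AccessControl):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {AccessControl} from "@openzeppelin/contracts/access/AccessControl.sol";

contract ParamsVault is AccessControl {
    bytes32 public constant GOVERNOR_ROLE = keccak256("GOVERNOR_ROLE");
    bytes32 public constant OPS_ROLE = keccak256("OPS_ROLE");
    bytes32 public constant KEEPER_ROLE = keccak256("KEEPER_ROLE");

    uint256 public liquidationBonusBps;

    constructor(address governor, address ops, address keeper) {
        _grantRole(DEFAULT_ADMIN_ROLE, governor);
        _grantRole(GOVERNOR_ROLE, governor);
        _grantRole(OPS_ROLE, ops);
        _grantRole(KEEPER_ROLE, keeper);
    }

    // Governor-only entry point: looks airtight when reviewed in isolation.
    function setLiquidationBonus(uint256 bps) external onlyRole(GOVERNOR_ROLE) {
        _setLiquidationBonus(bps);
    }

    // The "gas-optimized" refactor pulled the write into an internal helper...
    function _setLiquidationBonus(uint256 bps) internal {
        liquidationBonusBps = bps;
    }

    // Routine keeper work (stubbed out here): poke oracles, sweep dust, etc.
    function _poke() internal {}

    // ...and the keeper-facing batching utility dispatches by action id.
    // Action 2 routes to the internal setter, so KEEPER_ROLE now reaches a
    // governor-level write through a second hop.
    function batch(uint8[] calldata actions, uint256[] calldata args)
        external
        onlyRole(KEEPER_ROLE)
    {
        for (uint256 i = 0; i < actions.length; i++) {
            if (actions[i] == 1) _poke();
            else if (actions[i] == 2) _setLiquidationBonus(args[i]); // the implicit trust path
        }
    }
}
```

Every function there passes review on its own; only the composed Keeper → batch → _setLiquidationBonus path is the problem, which is exactly what role-mocked unit tests never exercise.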
Another recurring issue: time-locked roles that bypass their own delay through upgrade paths. Teams rely on TimelockController for safety but forget that upgrading the proxy itself is a privileged action, and unless that upgrade is also routed through the timelock, it bypasses the execution delay entirely.
A single misconfigured upgrade path means the timelock is a theatre prop, not a control.
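Here's a stripped-down illustration of that misconfiguration in a UUPS setup (contract and role names are hypothetical, and I'm assuming OpenZeppelin's upgradeable contracts):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {UUPSUpgradeable} from "@openzeppelin/contracts-upgradeable/proxy/utils/UUPSUpgradeable.sol";
import {AccessControlUpgradeable} from "@openzeppelin/contracts-upgradeable/access/AccessControlUpgradeable.sol";

contract FeeVault is UUPSUpgradeable, AccessControlUpgradeable {
    bytes32 public constant TIMELOCK_ROLE = keccak256("TIMELOCK_ROLE");
    bytes32 public constant OPS_ROLE = keccak256("OPS_ROLE");

    uint256 public feeBps;

    function initialize(address timelock, address ops) external initializer {
        __AccessControl_init();
        __UUPSUpgradeable_init();
        _grantRole(TIMELOCK_ROLE, timelock);
        _grantRole(OPS_ROLE, ops);
    }

    // Parameter changes go through the TimelockController as intended:
    // the delay is real for this path.
    function setFee(uint256 bps) external onlyRole(TIMELOCK_ROLE) {
        feeBps = bps;
    }

    // The hole: upgrades only need OPS_ROLE, so a new implementation can
    // rewrite setFee (or storage) immediately, with no execution delay at all.
    // Fix: gate _authorizeUpgrade on TIMELOCK_ROLE so the upgrade itself has
    // to sit through the same delay as any other governance action.
    function _authorizeUpgrade(address) internal override onlyRole(OPS_ROLE) {}
}
```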
And yes, privilege drift during upgrades is extremely common. I’ve seen teams add a new initializer to an upgradeable contract but forget onlyProxy/onlyInitializing.
One dev called it directly on the implementation contract and silently reconfigured roles. Zero events emitted, nothing failed, just a quiet ACL mutation sitting on an unused implementation address.
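A hedged reconstruction of that drift (names are hypothetical; the point is the missing reinitializer/onlyProxy guard and the silent storage write):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {Initializable} from "@openzeppelin/contracts-upgradeable/proxy/utils/Initializable.sol";

contract VaultV2 is Initializable {
    // Role stored as a plain address, so writes to it emit nothing.
    address public riskAdmin;

    // Locking the bare implementation was also missing in this story:
    // constructor() { _disableInitializers(); }

    function initialize(address admin) external initializer {
        riskAdmin = admin;
    }

    // BUG: added during an upgrade, with no reinitializer/onlyProxy guard.
    // Anyone can call it, on the proxy or straight on the implementation
    // address, and the write is silent: no event, no revert, just a quiet
    // ACL mutation.
    function initializeV2(address newRiskAdmin) external {
        riskAdmin = newRiskAdmin;
    }

    // Safer shape: bind the new initializer to the upgrade flow so it can only
    // run once, ideally invoked atomically via upgradeToAndCall so nobody can
    // front-run it between upgrade and configuration.
    function initializeV2Fixed(address newRiskAdmin) external reinitializer(2) {
        riskAdmin = newRiskAdmin;
    }
}
```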
The blast radius is huge because ACL failures don’t scream — they whisper.
They sit dormant until an attacker builds the right call-path.
Curious if you’ve tried mapping your permission graph with Slither’s call-graph printer (`--print call-graph`) plus manual tracing? It’s one of the few reliable ways to catch these multi-hop trust leaks before they ship.
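For reference, the invocation I use (treat the .dot filename as a placeholder, Slither's output naming varies by version):

```bash
# Call-graph printer: writes Graphviz .dot files you can render and walk by hand,
# tracing which roles can reach which privileged sinks.
slither . --print call-graph
dot -Tsvg <generated-file>.call-graph.dot -o callgraph.svg

# Useful alongside it: per-function summaries (visibility + modifiers) to spot
# setters reachable without the role check you expect.
slither . --print function-summary
```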