Agree with @sreeramkannan here, the design is not technically feasible. One federated learning / verifiable compute idea I found interesting from a while ago is [2102.05188] CaPC Learning: Confidential and Private Collaborative Learning — maybe it could be helpful to @mdesim01.
However, this does raise an interesting question around FL: instead of relying on cryptographic/statistical solutions to align the different participants and deter malicious behavior, where can economic security fit in to nudge behavior? I think the catch here is accountability (slashability), which may just push the question back to the crypto/stats proving part.
All in all, I personally think we are still a long way from FL being usable at a larger scale, mainly because of the performance cost imposed by the security and privacy constraints.