Our guests today are Paul Frambot, co-founder and CEO at Morpho, monetsupply from Block Analitica, and Nick Cannon, VP of Growth at Gauntlet.
In this conversation, we explore the current state of DeFi risk management for lending protocols, what problems have emerged from the existing DAO-based model, and what improvements can be made.
Hey everyone, welcome back to Degen Responsibly. We have a special episode for you today featuring guest speakers from Morpho, Block Analitica and Gauntlet. The focus of today's space is to discuss the current landscape of DeFi risk management, what's wrong with it, and how it can be improved. Before we dive into the discussion, for those people who aren't aware, could you guys give a quick intro of your roles and what your protocol or firms do?

Sure, I can go. I'm Nick, I'm at Gauntlet. We are a financial modeling solution provider, primarily for DeFi protocols and other people involved in the crypto space, focused primarily on economic and market risk, along with an applied research team which does a lot of novel research around mechanism design, incentives and other things.

Hey, I'm Monet. I work with Block Analitica. We're a risk management and data consulting company. We primarily work with MakerDAO as kind of their in-house risk team, but also do other stuff here and there for other lending protocols in the space.

Hey, I'm Paul from Morpho Labs. I'm CEO there and we're building the Morpho protocol. So basically we have two protocols. One protocol is known as Morpho Optimizer and is a lending protocol built on top of existing lending protocols such as Aave and Compound, providing strictly better rates with the same risk and liquidity guarantees. And we've recently announced that we've been working on a new lending primitive that is trustless and has interesting takes on risk management, and I think it's going to be a great occasion to discuss those today.

Awesome. Yeah, so I think it would be good to first talk about what the landscape is today for risk management. How exactly do DAOs manage risk, and who are the primary stakeholders involved? Maybe start with you, Paul, since Morpho does have that unique experience of being both a lender and a user of these platforms. Yeah, sure.
So I guess the current space of risk management has many different faces depending on the protocol. I'll just focus on Aave and Compound; obviously Monet could talk about Maker, but I think when it comes to stablecoins it's fairly different as well. When it comes to lending, you have different layers in risk management. You can come with the intent of just lending some USDC, and you want to be very passive about it. So at some point someone is going to basically invest that USDC, and this sort of investment decision has to be guided at some point by expertise. And this expertise can come in at different levels. The way the Compound protocol initially introduced its model is that basically having the user choose all those different risk parameters is a nightmare. So what we're going to do is have the Compound DAO manage those risk parameters on behalf of the user. Well, now, the DAO is a community of token holders and cannot possibly monitor every possible outcome of what could happen to that USDC. So they started employing risk contractors to basically advise them on those risk decisions. And that's the current, mostly very successful model of Aave and Compound: having risk consultants working for the Aave DAO, for the Compound DAO and many other protocols out there. And I have to say this has been very successful over the last years. To me, and we'll probably have the chance to discuss it, it just has some scalability limits in the long run. But that's how the space is currently shaped. I would add, though, that you also have all the data analytics providers that provide some sort of visual representation of the blockchain. I would not say they advise, but they actually provide good, insightful data to the user. Block Analitica, for example, has very good data on liquidation prices across different lending protocols.
So I would say that's how the space is shaped today.

Cool. Nick or Monet, is there anything you want to add there?

I think Paul covered it pretty well. I think in his article he talked about liquidity, and I'd say that's probably the primary job to be done, or pain point, that the Compounds and Aaves of the world are solving. And I'm not speaking for the community and all the delegates or stakeholders. The focus is sometimes a bit more on growth of borrow or TVL versus the long tail of users, and there are a lot of trade-offs to be considered there from a risk management perspective. And then on the comment about legal liability, it's tough to say given these core teams, Aave being primarily European, Compound being primarily, or at least originally, US based. There are different considerations there. And usually the DAO doesn't have a lot of insight into those decisions, not just from the core teams, but from the large stakeholders, which sometimes are US-based VCs or elsewhere.

Yeah. So I think we can dive into some of the points that Paul made in his article. He basically called out the existing DAO-based model, as it is today, as not being well suited for risk management. There are various reasons for that, from it not being agile enough, to its members not being experts at risk, and it just creates problems like broken incentives for risk managers. So why don't we get the risk consultant side from either Monet or Nick: do you agree with the points he makes about the limitations as they are today?

I can hop in for a moment. I think he was really right to call out that there, at least in theory, is a bit of a principal-agent problem between the risk service providers and the protocols that they're advising. The service providers, at least financially, their incentive is to kind of continue having the contract renew.
And then also the delegates and the token holders for these various protocols might not have the expertise to really determine whether the service providers are doing a good job. I know in his article he called out a theoretical situation where there are two different service providers bidding on a contract with one of these protocols, proposing sort of different risk parameters, and they're also competing on price. So yeah, I think it's difficult for communities to really align behind who they should be taking advice from. And then there's whether or not they actually just accept the advice as is and kind of vote stuff through, or whether they want to incorporate other strategic goals, like maybe they want to favor certain tokens in their protocol because they have a partnership with this other protocol. So yeah, there's a little bit of a theoretical alignment issue between service providers and protocols, and then also between protocols and users. The incentives aren't perfectly aligned all the way down that stack.

I'd echo a lot of that. And then, you know, if a DAO is successful, as both Paul and Monet have indicated Compound and Aave definitely have been and are, they do actually become more of a DAO, as in more decentralized: there are more delegates, more professional delegates, often more service providers. So while the governance forum and crypto Twitter can seem more contentious, the alternative to this success is a reversion back to the core team. We've seen a lot of people try to set up a DAO or a forum or a Discord and then go sort of dark, and a lot of those controls, or risk parameters, that they hoped the community or service providers would tune and optimize over time actually just revert to the original core team.

Yeah. So it sounds like you guys both agree with some of the limitations that Paul calls out.
But I think Paul also mentions a deeper, fundamental structural issue, where, Paul, you talked about decentralized brokers versus protocols. Today's Aave and Compound are more like decentralized brokers, where you have this on-chain governance system. But I think you're arguing that for the future of finance, we need a DeFi primitive that is governance minimized, where you eliminate these trust assumptions that still exist in lending protocols today.

Yeah, I can expand on that; that was kind of the question, by the way. On this point of decentralized brokers versus protocols, I guess it depends on the use case and the scenario. For example, with stablecoins, I feel like this sort of decentralized broker model is very appropriate, because it's very hard to come up with an actual protocol where the complexity is pushed back to the user, just because every single user is tied to the same stablecoin. No one can express a preference, because they're all tied to the same risk profile in some sense, which is embedded into that same token. So I think in the case of stablecoins it makes a lot of sense to have this sort of model where the DAO is managing the risk, or having some entities or subDAOs basically manage that risk for it. In the case of lending, I think it's fairly different because there are so many different lending use cases. We're seeing long-tail assets, real-world assets are becoming increasingly interesting as well, but there are obviously the blue chips, and you have cross-chain. So it's very multidimensional. And the truth is that users are not necessarily tied to one another, in the sense that they may want to express different risk preferences. Now, it is true that in order to scale, it's good to fungibilize liquidity under the same risk model.
And this is what Compound and Aave have been doing extremely well. But at some point, and you could see this over the last year, the number of risk parameters that you have in those lending pools is becoming really, really large. I think on the Aave DAO we're close to 700 intertwined risk parameters, and the complexity just keeps growing if we want the protocol to grow. I think that is not scalable, also because parameters are intertwined with one another: you have to take efficiency trade-offs, so it's not efficient enough. And obviously there are different trust aspects. So those are the three reasons why I believe in an approach where we instead have a protocol where only the logic of the loan is executed at the primitive layer, and the complexity of risk management is pushed back to the user. Some users will know how to do risk management, okay, maybe not the majority, but some will, and they would directly interact with the primitive. And those that don't know, well, they can pay whatever risk advisor to help them do so, or supply liquidity in vaults that are doing risk management on their behalf. This way, not only do you reproduce the user experience for the user, but you also re-aggregate liquidity for the end user. And I think that model for lending, where you have a protocol that is completely trustless and also much more primitive, so it's more efficient, below, and then on top you have some sort of permissionless risk management layer, to me feels like a stronger model, a more scalable model, and also a cleaner and more trustless stack. But again, I think that's specifically true for lending, and I'm not sure this would apply equally to stablecoins. I'm very curious to hear Monet's and Nick's opinions on that one.
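To make the "push risk management up the stack" idea concrete, here is a toy sketch in Python. It is purely illustrative, not Morpho's actual design; the `Market` and `Position` types and all the numbers are hypothetical. The point is that an isolated market can carry a single immutable protocol-level risk parameter (a liquidation LTV), while the choice of which markets to touch moves up to users or vaults:

```python
# Illustrative sketch (NOT Morpho's actual code): an isolated lending market
# whose only protocol-level risk parameter is a liquidation LTV fixed at
# market creation. All names and numbers here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Market:
    collateral_token: str
    loan_token: str
    lltv: float  # liquidation loan-to-value, immutable after creation

@dataclass
class Position:
    collateral: float  # units of collateral token
    debt: float        # units of loan token

def is_healthy(pos: Position, market: Market, collateral_price: float) -> bool:
    """A position stays solvent while debt <= collateral value * LLTV."""
    return pos.debt <= pos.collateral * collateral_price * market.lltv

# Anyone can create a market with the risk profile they want;
# deciding which markets to supply to is the risk management layer on top.
degen_market = Market("CRV", "USDC", lltv=0.50)
blue_chip_market = Market("wstETH", "USDC", lltv=0.90)

pos = Position(collateral=100.0, debt=45.0)
print(is_healthy(pos, degen_market, collateral_price=1.0))   # True: 45 <= 50
print(is_healthy(pos, degen_market, collateral_price=0.8))   # False: 45 > 40
```

The contrast with the pooled model is that none of these LLTVs interact: freezing or mispricing one market cannot make another market insolvent, which is what "the protocol cannot go bankrupt at the base layer" is gesturing at.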
Monet or Nick, do you guys want to respond to that?

Yeah, if I can briefly talk about that. I generally think that Paul makes really good points about scalability and about trying to make a protocol that's as general as possible at the base layer. It opens you up to stuff like listing longer-tail assets, and individual users can decide whether or not they want to participate or have exposure to them. So I think it broadens the event horizon of what you can do with DeFi lending protocols. And also, with how many different assets and parameters are involved in Aave and Compound money markets, you do see that it gets really difficult and time consuming to just keep updating all these parameters as conditions change. I think there's still a bit of opportunity to push forward the existing Compound- or Aave-style model of lending, though. If you figure out ways to automate some of these parameters in a way that's responsive to risk conditions and markets, you can abstract a lot of these parameter change proposals into much higher-level formulas or algorithms that you only need to update very infrequently, if you notice they're reaching some sort of unproductive local maximum. So I think there's still a lot of opportunity to get to a more trustless state, with a lot less tinkering, for the current model of lending as well. It won't necessarily be as general as a really unopinionated base-layer lending protocol, but at least at Maker, we've seen that the vast majority of demand and usage from different users in the space is mainly on larger assets. So you can still capture the majority of the demand for lending, and theoretically you might be able to capture that with a protocol that's still based on the Aave and Compound model.
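As a loose illustration of the kind of parameter automation just described, abstracting individual parameter-change proposals into a higher-level formula, one could cap LTV so that a liquidator's bonus still covers expected sale slippage plus a volatility buffer. This is a hypothetical sketch, not any protocol's or consultant's actual formula; the function name, inputs, and buffer choice are all assumptions:

```python
def suggested_max_ltv(volatility_daily: float, dex_slippage: float,
                      liquidation_bonus: float, sigma_buffer: float = 2.0) -> float:
    """Toy rule: cap LTV so a liquidator can still close the position
    profitably after an adverse price move (sigma_buffer standard deviations
    of daily volatility) plus the slippage of selling the collateral."""
    haircut = liquidation_bonus + dex_slippage + sigma_buffer * volatility_daily
    return max(0.0, 1.0 - haircut)

# e.g. 5% daily vol, 2% slippage at the relevant size, 5% liquidation bonus:
print(round(suggested_max_ltv(0.05, 0.02, 0.05), 2))  # 0.83
```

A governance process could then vote once on the formula and its inputs (volatility window, slippage oracle) instead of voting on every individual LTV change, which is the "update very infrequently" property Monet describes.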
One of the first things we did, about two years ago when we secured an engagement with the Aave DAO, was to put up a Snapshot vote for what risk profile the DAO wanted. We tried to keep it high level: aggressive, conservative, and moderate, I think, were the choices. We sort of dissatisfied a lot of people. I think most people fell in the middle, and we landed on a moderate model for value at risk. Of course, how we model the risk changes over time, as does the DAO's perception, and the DAO's stakeholders are extremely fluid too. The voting populace is extremely fluid: new token holders, new whales, new community members and things like that. So it's been historically difficult and extremely fluid, but at the end of the day, functional. Even across all the new deployments: I think when we started there was only mainnet and Polygon, and now there's considerably more. So yeah, it's been a natural and evolving tension ever since. Room for improvement, for sure.

I definitely echo some of that sentiment with Paul. Regarding the point that there's an ever-increasing number of risk parameters that you may need to update, is that something you think is sustainable in the long term for risk managers? Do you guys also have that competing balance of driving capital efficiency versus risk management when proposing these parameter updates?

We do. Capital efficiency was the meme of crypto Twitter a year and a half ago. And then post-FTX, capital efficiency was no longer; it's just, hey, make sure we don't blow up. So that's a little bit of the risk profile in narrative and revealed preference through the forum and a lot of Snapshot polling and votes and similar. Is this sustainable? To some extent, with more headcount and more models, and we feel like we've scaled pretty well across all the Aave L2 deployments and elsewhere. Others might disagree, though.

Yeah, just quickly on this.
So first, yeah, we have to acknowledge that this model has in practice been working very well across different chains. Even though it seems complicated to have DAO votes being pushed every day, in practice this is what's happening, and it's been successful so far. What I would be more worried about is, in the long run, having more unfortunate events like the one we've seen with the CRV position, where the model becomes too complex for monitoring to actually be able to capture every possible outcome of such pools on so many different markets. So my point would be that bad things can happen, and they've almost already happened many times. And I think because of that, we should lean toward a more permissionless and trustless base layer with permissionless risk management on top. But at the same time, I agree with Monet's point: I think there's still room for improving the operations of setting risk parameters. There are definitely ways. I think the RiskDAO paper is really good about devising a formula for the maximum LTV, which is the core risk parameter of each lending protocol. And we're increasingly finding ways to automate those things. I think ZK storage proofs are an incredibly promising technology in that regard, where we'd be able to compute interesting values on chain that could be used in a more general way to compute risk parameters. What I'm saying is that this just pushes back the scalability issue. Maybe that can keep growing and keep evolving in the right direction, but the end game, to me, cannot be, or at least it does not seem right, to rely on one single statistical approach for all the lending activity of the entire world.
If you compare it to traditional finance volumes, where you have trillions of dollars of daily volume in lending activity, I don't believe we can have one single pool of liquidity, even with automation and even with scaling the number of people monitoring those risk parameters. I would be very uncomfortable having all the liquidity of the world in one single pool. And again, maybe that can be divided into multiple pools, et cetera, but again, that's pushing back the problem, the fundamental problem, I think.

Oh, for sure. I mean, you don't want to have one risk manager for all of crypto, right? That'd just be bad.

I think there's something really appealing about the idea of the protocols, as Paul was describing them in his article, that they can't go bankrupt. In a lot of the lending facilities that we have now, like Compound or Maker or Aave, the risk is held at the protocol layer, which has the benefit that those DAOs are basically providing implicit insurance for their users. But the idea of a base layer of lending primitives that just by definition cannot become insolvent is really appealing. It feels like that has a lot going for it as a safe foundation to build on top of.

Interesting points. I think you guys certainly agree that there are still some improvements that could be made to the existing model. One other point that Paul also called out was the lack of transparency around some of these, I guess, closed-source models that you guys use on the risk consulting side. Do you think maybe open sourcing those models, or providing more transparency on how you model these value-at-risk events, could make the space safer overall?

Yeah, just to be clear, I don't think the problem lies in the fact that not everything is open source in risk management.
I think that's just a consequence of how the incentives are designed in the current model, where there are actually very few incentives for risk managers to fully open source all the models they use, just because otherwise, why would the DAO want to pay a consultancy fee if everything is open source and very accessible? So my point is that it would be much better to have open source stuff, but currently, the way the system is designed does not incentivize open source. But obviously, I think Nick would have a much more complete take on this than me.

I mean, yeah, we've open sourced a good amount of our methodologies around VaR, how we weight models, and how we do simulations to extrapolate historical trends and things like that. And of course, we protect some of our agent IP and model IP. We are super bullish on ZK coprocessing to be able to more provably tell DAOs and communities our model weights and our inputs, without revealing too much to Monet or other contractors, as we see fit. And while I'm super bullish on what Morpho is building, I don't think those models perfectly solve that. We wouldn't just completely open source everything Gauntlet's building. We have a lot of things we think are valuable, but we also want to lean into the open source ecosystem as much as possible, and into ZK as much as possible. Yeah, there's obviously a good amount of trade-offs that we haven't perfectly solved in how we communicate, but the trend and the pull of any DAO that we operate in has always been: explain, tell us why. And we've tried to lean into that as much as possible.

Yeah, I think for protocols it's sort of an implicit thing when they're deciding or deliberating over what service providers they're hiring and how that whole contracting process works.
I think DAO voters do kind of keep this in mind, but if you create a situation where it's too competitive, and you'll just switch the service providers you're working with on a dime because somebody undercuts their price 5% or 10%, it can create negative incentives where the service providers just aren't financially incentivized to open source as much, or to do more automation research or stuff like that that's going to reduce the workload they can charge for. So it's kind of an interesting layer to the whole DAO-service provider relationship that it's a repeated game. Basically, you need to think about how that whole hiring process flows through to the incentives and the work products and transparency that you're going to get out of that relationship. So yeah, it's a really complicated situation.

It's complicated for the DAOs too, right? Any service provider that's serving a DAO, and the DAO asking for that transparency, saying, like, Gauntlet or Block Analitica, show your methodology in our public forums: the Aave forks of the world, the Compound forks of the world are then just scooping that work, or at least bootstrapping off of it, or using it as information, which maybe they rightfully should. So the DAO actually wants to protect the IP of the service providers they're paying for as well.

Yeah, good points. That's definitely not an easy thing to do, being fully open source. I guess this goes back to one of Paul's other main points: do you believe the current DeFi protocols today, as they are, can be sustainable in the long run? Or do you think we will need another primitive that serves that need better?

Sustainable? Yes. Do I think they've unlocked all the demand? No.

I would probably express a bit more doubt on the sustainability aspect of things.
What's interesting about Morpho is that it's at the same time the third largest lending protocol on Ethereum, but it's also the largest lending protocol user. We've been building on top of Aave and Compound for the last two years; we deployed about a year ago. And the first thing you realize when you're an actual builder on top of Aave and Compound is that it's very moving ground. There are upgrades, some versions are deprecated, new assets are coming in, some are frozen, some are not frozen. And if you're building on top of Aave, like Morpho is, it's really, really complex to actually be able to build sustainable businesses. So the core itself could remain sustainable, but the integrations on top probably would not be able to unlock as much value as they could have on top of a more immutable primitive. So that's more on the demand side of things. On the sustainability side of things, I would say that as someone building on top of Aave and Compound, I've also been genuinely, genuinely worried about what the risk decisions by the DAO were going to be in some intense moments, where the DAO had to react quickly or some quorums had to be reached fast, et cetera. I don't want to be a doomsayer here, but the idea is that if it can happen, I think at some point it will happen. Maybe not now, maybe not next year, maybe not in the next two, three years, but maybe in the fourth year we'll have a very bad event, or some very large bad debt that will be followed by some bankruptcy. I'm not saying that this will necessarily happen.
I'm just saying that, as someone that has been building on top of those lending pools, it's a very uncomfortable position to be in, to basically witness incident after incident, bad debt after bad debt. It's hard for me to believe that this is the long-term model, at least as it is now. I think if the Aave platform wants to keep existing five years from now, it obviously has to evolve a lot from where it is right now, on different aspects. But yeah, I would express a bit more doubt on the sustainability side of things.

Yeah, I'd be curious to hear from you, Nick. Did you guys model the impact that Curve would have had if the price had hit the liquidation point, and the sort of impact it would have had on the whole Aave ecosystem?

We did. We put it in the forum. I don't recall the exact insolvency expectations at the different drawdowns; they were in the millions. They weren't the full position. Obviously, there's this composite order book you have to take into account, where it's not just the DEX and centralized exchange liquidity: clearly, Michael Egorov was able to source plenty of OTC demand. So how do you consider that, versus what you can definitely prove and show on-chain, or by reaching centralized APIs and otherwise? And then you try to just best inform the DAO and the delegates of the decisions to make. Luckily, we were just echoing a lot of the sentiment we had shared for nine months in and around Curve: reducing exposure to that asset and trying to deprecate it, to get that position and that exposure over to V3, which is more isolated. That was sort of the goal of our education campaign, I guess you can call it that.

I think the Curve situation is really interesting as well, because I think a lot of people have reacted to it as evidence that the current managed, or you could call it brokered, model just doesn't work fundamentally.
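A stripped-down sketch of the kind of insolvency modeling being described here, simulating drawdowns for a large position and estimating expected bad debt. This is emphatically not Gauntlet's actual model (which incorporates order-book depth, OTC flow, and much more); the function, its parameters, and the random-walk price process are all simplifying assumptions for illustration:

```python
# Toy Monte Carlo bad-debt estimator (illustrative only, not Gauntlet's model).
# Simulates daily price paths; if the position crosses the liquidation
# threshold, assumes the collateral is sold at a slippage haircut and any
# shortfall versus the debt is counted as bad debt.
import random

def expected_bad_debt(collateral_units: float, debt: float, price: float,
                      liq_threshold_ltv: float, daily_vol: float,
                      exit_slippage: float, horizon_days: int = 7,
                      n_paths: int = 10_000, seed: int = 42) -> float:
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        p = price
        for _ in range(horizon_days):
            p *= 1.0 + rng.gauss(0.0, daily_vol)  # crude random-walk step
            if debt > collateral_units * p * liq_threshold_ltv:
                # Liquidation: sell everything at a haircut for size/slippage.
                proceeds = collateral_units * p * (1.0 - exit_slippage)
                total += max(0.0, debt - proceeds)
                break
    return total / n_paths  # average shortfall per path
```

Even this toy version surfaces the tension in the discussion: the answer is dominated by `exit_slippage`, i.e. how much liquidity you assume exists, which is exactly the quantity OTC demand made hard to prove on-chain.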
But I think, in a way, it's actually just very specific to Aave V2 having fewer risk controls available at the protocol level. Already with Aave V3 and its iterative improvements, with isolation mode and debt ceilings and stuff like that, you can see that it would be much safer to have a Curve position exist over there than on V2. So, yeah, I think it is a little bit of evidence of the struggles that the decentralized broker model of protocols has, but it also points towards it not being a lost cause: if you continue iterating and improving on these mechanisms, they still have a bit of gas in the tank.

Yeah, I would add, though, that the direction the decentralized brokers are supposed to take is that they have to increase the number of parameters, increase the complexity of the pool, in order to tackle every single edge case. And I think that would be the path forward for decentralized brokers: increasing the control that the DAO can have, and the flexibility and the dimensions that they can actually monitor. I think that raises new problems, which are obviously scalability and efficiency overall, and also just readability. For token holders to truly understand what they're doing when they're raising a borrow cap or changing an LTV in Aave V3, I'd be surprised if most of the voters actually understand what they're voting for, to be frank. I guess there is definitely gas in the tank for this model. I'm just thinking that I don't believe it's the most scalable, trustless and efficient way forward. But yeah, there's definitely a path there.

Yeah, why don't we move the conversation a little bit towards what a potential solution or improvements could look like. Starting with you, Paul, you talked a lot about having governance-minimized protocols as the base layer.
I think you guys are also building something at Morpho, a permissionless lending protocol. Anything you can share there?

Yes. Unfortunately, I'm not able to share much about it. The next primitive that Morpho is building is just a trustless and efficient lending primitive with permissionless market creation. It is incredibly primitive, and it enables you to basically recreate the abstractions that people are familiar with from Aave or Compound. But what's nice about having a primitive is that you can project different risk appetites, different conditions for the user, and it all ends up in the same base primitive, which is completely trustless. And because it's doing one thing, it's much more efficient at the core layer, without having to fungibilize every single risk profile. So yeah, there's not much more I can share at this stage. Overall, I think you understood that I'm very bullish on separating this risk management layer from the base protocol in order to have trustless, efficient, and scalable lending. And I think this is what we need in order to get to the next orders of magnitude. Because again, I'm sure Aave is able to get a plus 50% or plus...

In the meantime, for Nick and Monet, I think you guys maybe have different viewpoints than Paul on the need for a primitive layer, but you still think there's room for improvement in the current DAO-based model. Could you talk about what those improvements would be? Is that automation of parameter updates? Is that new protocol versions with isolation mode?

Yeah. I mean, I'm not completely skeptical; obviously, I'm super bullish on Paul and his team building this out. I think these models exist on a spectrum, and I'm excited to see what the market decides. I can definitely see a world where there are, like, Gauntlet Alpha, Gauntlet Degen, Gauntlet Institutional models being put out in the world.
And then users can pick those risk profiles, or those tools, to deposit, borrow, lend, whatever, in. Yeah, I think that could be cool.

Yeah, I agree. I think catering to a broader range of risk preferences is really, really interesting and will definitely bring a lot of benefit to the space. Even if you adopt that model of a fully permissionless, decentralized base layer, a lot of the mechanism design being put to work at places like Aave or Compound or Maker, around automation, parameter management, and risk management, a lot of that same insight is going to end up being applied on those permissionless protocols as well, just one or two layers higher in the stack: for the pool managers who are setting up the various risk parameters, or for yield aggregators deciding what pools to invest in and what parameters to adopt for them. So I think the jobs to be done are actually pretty similar for both of these models; it just depends on where in the stack they're being applied.

Sorry for disconnecting, I had some Wi-Fi troubles. I don't know how much you caught of what I said.

I think we missed the last part of your statement.

Oh yeah. I think I mostly said it all. I was just saying that operational improvements that get us more parameters, which is the path for decentralized brokers, are likely going to get us a plus 50% or plus 100% improvement from where we are now. I think what we need for DeFi today is a 10x or 100x improvement, to actually get at least a little bit closer to something that's competitive with traditional finance, which to me is the end goal. So I don't believe incremental improvements and further complexified operations should be the way, but again, I'm happy to be proven wrong here. And I also very much agree with what Monet said on automation.
I do think that is a global effort as an ecosystem, and it will be reusable in both approaches, just at different layers. So yeah, very bullish on this.

I think that's a good place to wrap up our discussion. This was a great discussion on DeFi risk management, where the space is today and how it's going to evolve in the future. I appreciate having you all on Degen Responsibly, and I look forward to having future discussions on risk management.

Yes, thank you for hosting this. I think, more generally, the more people talk about risk management, the better, just because I think it's not sufficiently debated in the space. Obviously it is debated, but there's so much room for improvement in general that I'm grateful we have the opportunity to speak about these subjects.