Been thinking of writing this for a while and have an hour or so before football. Folio users may ignore this thread; that's an entirely different animal.
Before we see any loan, LC has reviewed it in sufficient depth, to its own satisfaction, to assign an interest-rate sub-grade. Not only do they have all the data that eventually becomes available to us in the browsenotes but, given direct access to the borrower, they clearly have much more. They have a team of seasoned professionals leading the underwriting process, and they have been doing this a long time. They have developed a proprietary loan-scoring model, and every bit of historic loan data available to us is also available to them. They have access to expensive historic data from the credit bureaus that is likely not economical for the rest of us to obtain. Finally, history has shown LC has done a pretty good job of achieving its stated goal to "... provide higher risk-adjusted returns for each loan grade increment from A1 to G5" (actually not so well with D, F, and G).
LC's policy makers select an interest-rate structure they believe will maximize their fees at any point in time, and loans are uniformly underwritten to conform to that structure. To the very best of LC's ability, all loans graded XX on any given day are equivalent, and no loan is graded XX when it should be YY.
So why do we believe we can take a subset of the data available to LC, filter for zero inquiries, no business loans, income >$3k/mo., etc., and get the "best" notes? Typically when we filter, we filter on grade, not sub-grade, and those are pretty coarse risk bins. I suppose it's possible to build a model better than LC's from the data we have or can get, but I don't see large, obvious factors persisting over time.
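For concreteness, the kind of filter described above might be sketched like this in Python. The field names (inq_last_6mths, purpose, annual_inc) are my own assumptions, loosely modeled on LC's downloadable historical loan files, not any actual API:

```python
# A minimal sketch of a typical retail note filter.
# Field names are hypothetical, loosely modeled on LC's historical
# download files -- not an actual LC API.

def passes_filter(note):
    """Return True if a note passes a typical retail filter set."""
    return (
        note["inq_last_6mths"] == 0              # zero recent credit inquiries
        and note["purpose"] != "small_business"  # no business loans
        and note["annual_inc"] >= 36_000         # income > $3k/mo
    )

notes = [
    {"id": 1, "grade": "C", "inq_last_6mths": 0, "purpose": "debt_consolidation", "annual_inc": 60_000},
    {"id": 2, "grade": "C", "inq_last_6mths": 2, "purpose": "debt_consolidation", "annual_inc": 60_000},
    {"id": 3, "grade": "C", "inq_last_6mths": 0, "purpose": "small_business", "annual_inc": 90_000},
]

selected = [n["id"] for n in notes if passes_filter(n)]
print(selected)  # -> [1]
```

Note that the filter never touches sub-grade: all three notes above are grade C, which is exactly the coarse-bin point.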
Nonetheless, filtering is widely practiced and by all accounts appears to work. What am I missing here? (Please avoid using the word heretic; also dummy, if possible.)