|07/28/2023||ChatGPT for maximum productivity|
|07/25/2023||How ChatGPT can be used as a lending tool|
|03/25/2023||I wanted ChatGPT. I got Sarah instead?|
ChatGPT-4 vs ChatGPT-3
“…It’s just been pretty impressive how we have this technology that came out that has literally changed everything, and it was almost overnight,” said Kareem Jernigan, CFO at Leasing Associates Inc.
Earlier this week the Black Equipment Finance Network (BEFN) hosted a webinar titled ‘Leveraging ChatGPT for Maximum Productivity’ for the equipment finance space. The panelists were Cheryl Tibbs, George Parker, and Kareem Jernigan.
Jernigan is already using ChatGPT in his daily work life. In one example, he said that he commonly uses it to analyze and find errors in spreadsheets. In another, he’s saving much of the time he used to spend on writing.
“…I also run HR, and so I have to craft communications and/or policies,” he said, “so, when you think of crafting a policy or a communication that you have to send out, before ChatGPT that would be something that would take 4 or 5 hours to do. […] That task today I could get that done in 30 minutes.”
Meanwhile, George Parker, Co-CEO at VenSource Capital, said “You can use ChatGPT with training staff. You can develop training material. You can give tests and quizzes. You can design training programs with time as an element, it can do all of that. You can even design courses with ChatGPT. You can tell ChatGPT to come up with a time frame for learning each part of a subject.”
Parker added that ChatGPT can be used for brainstorming, articles, planning, meeting content, customer service responses, and more.
Even in the finer details of equipment financing itself, the panelists said that ChatGPT can produce credit assessment profiles and analysis, analyze and summarize financial data, and scrutinize contracts.
Of course, all of this only works if one understands what to put in and takes care to evaluate what comes out. It’s all about the proper “prompt.” One example offered of a prompt that wouldn’t work is: “teach me credit analysis.” Something like that would be too vague and would result in a response that was too broad; an effective prompt narrows the scope by spelling out the context, the audience, and the format the answer should take. Part of the reason there are panels and webinars about ChatGPT to begin with is so the industry can learn how to leverage it for maximum productivity.
“Based on the balance sheet provided, the business appears to have a healthy financial position,” the report states. This is the opening line of the written Financial Health Analysis conducted by OpenAI’s ChatGPT. From there it elaborates at length with all the relevant financial stats that an underwriter could ever dream of, even going so far as to recommend all on its own that recent tax returns, among other stips, should be requested to move forward.
What the world is coming to know as a chatbot is capable of much, much more, according to Dave Kim, co-founder and CEO of Harbr, Inc. Harbr’s flagship product, IntakeIQ, is taking online application technology to more advanced places thanks to the introduction of real artificial intelligence. But there’s a right and wrong way to do this because keeping applicant information anonymous and secure is paramount.
“…security is massive, right?” said Kim. “Like you have to know going in that if you’re going to use a GPT or a Large Language Model that’s being hosted and you don’t have control of it yourself, that the data is 100% being used for machine learning.”
And along with security is the science of data input. Roughly speaking, the more information you send to ChatGPT the more it costs to spit out an answer. That means data not only needs to be secure but condensed down to such compact bits of input that the cost is acceptable and scalable. This is no domain for amateurs who think they can accomplish this with a basic monthly ChatGPT subscription. And Kim is no amateur.
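To illustrate why compacting the input matters, here is a back-of-the-envelope sketch of how per-request cost scales with tokens sent and received. The rates below are made-up placeholders, not OpenAI’s actual pricing, which changes over time and varies by model:

```python
# Back-of-the-envelope API cost estimate. The rates here are
# illustrative placeholders, NOT real pricing; providers typically
# bill per 1,000 tokens, with separate input and output rates.
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float = 0.03, output_rate: float = 0.06) -> float:
    """Return the dollar cost of one request, with rates per 1,000 tokens."""
    return (input_tokens / 1000) * input_rate + (output_tokens / 1000) * output_rate

# Halving the input tokens halves the input portion of the bill,
# which is why condensing data before sending it matters at scale.
```

Multiply that per-request figure by thousands of applications a day and the incentive to trim every unnecessary token becomes obvious.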
“My background is in enterprise software development,” Kim said. A previous company he co-founded, GoInstant, was acquired by Salesforce for $70 million in 2012. Kim was already developing AI-driven technologies long before ChatGPT became known to the world, more recently in the commercial construction business. Work on invoices and payments, combined with OCR technology, soon evolved into a separate use case applicable to financing products like factoring and more. But their tech had to understand the niche particulars of the information it was analyzing.
“So we essentially started training a natural language processing model using machine learning techniques around those sorts of phrases and terminology for the construction industry,” said Kim. “So we were building that kind of tech first and then it became relatively easier when dealing with broader information in documents and other invoices that were coming in for not just construction.”
In 2022, Kim first encountered the capabilities of ChatGPT. He said that while the AI is great at creating a diversity of answers, the way they engineered their prompts with financial data produced consistent output. That’s what’s key. Harbr’s technology does a lot of the work on its own side first before sending off a highly secure, highly redacted, anonymized and reduction-optimized prompt to ChatGPT. The process can start with a PDF statement, which is automatically OCR’d and analyzed before any of this happens. Harbr isn’t able to view or retain any of the data, and ChatGPT does not know anything identifiable about the applicant. Only the lending company is privy to the applicant’s info and the results. Setting this up for a lender can be accomplished very quickly.
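As a rough illustration of the kind of redaction step described (a hypothetical sketch, not Harbr’s actual pipeline), identifying fields can be masked with labeled placeholders before any text leaves the lender’s side:

```python
import re

# Hypothetical illustration of redacting identifying fields from text
# before sending it to a third-party LLM API. The patterns are
# deliberately simplified; a production system would use far more
# robust entity detection.
PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each match with a labeled placeholder like [SSN]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

The point of the design is that the model still sees the financial structure of the document, but never the applicant’s identity.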
The object isn’t to entirely replace underwriting, but to make it more efficient.
“Today we work with businesses that are in asset based lending, factoring, supply chain finance,” Kim said. “We’re starting to look at equipment, transportation, equipment financing and leasing. […] I think the entire secured finance market, there’s a fit here as the technology grows.”
Anyone that’s ever faced a coding hurdle has inevitably ended up on Stack Overflow, the go-to platform for developers to solicit answers from more experienced professionals about their challenges. Users typically explain what they’re trying to accomplish and paste a copy of the code that’s not achieving the desired result. That’s where the community chimes in, coming forth with their own solutions while other users upvote the best answers. The end result is not just a grateful user but an ever-growing public database of questions and solutions. The sheer scope of what’s been compiled has opened the door for other users to simply find a similar enough question that’s already been asked and copy the answer. It’s a very valuable tool.
Stack Overflow has been around for 15 years but from March to April of this year, traffic plummeted by 17.7%, according to SimilarWeb. Tech blog Gizmodo has suggested that a contributing cause is ChatGPT-4, the OpenAI chatbot technology that can write its own code, edit a user’s code, and even converse about what a user is trying to accomplish. A spokesperson for Stack Overflow confirmed to Gizmodo that ChatGPT was partially responsible for its loss of users. “However, our vision for community and AI coming together means the rise of GenAI is a big opportunity for Stack,” the spokesperson added.
But what’s a coding forum for nerds and brainiacs got to do with the lending industry? Well, for one thing borrowers were already flirting with asking virtual assistants for help with financial services products before ChatGPT even entered the ring. According to the most recent Smarter Loans survey, 16% of loan applicants surveyed said that they had at some point used Alexa, Siri, or other voice search tools to find information about financial services. None of those come even remotely close to what ChatGPT-4 is able to do. And AI is popular, so popular in fact that ChatGPT became the fastest growing app in history, crushing even the likes of TikTok in pace of growth. ChatGPT already had 100 million monthly users as of February, before its signature ChatGPT-4 model was released.
Therein lies the threat because not only is ChatGPT-4 incredibly adept at making coherent conversation but it is also ready to explain a concept or make a recommendation, just like a very knowledgeable friend would. For example, when asking it to make a list of the top small business funding companies, these were among the names it spit out:
- American Express (Kabbage)
- Funding Circle
- Square Capital
- National Funding
- PayPal Working Capital
It’s not a vomit of names. ChatGPT-4 was familiar with their areas of expertise. When pressed further, it said that OnDeck would help get the cash fast but that working with Square Capital might work better if one is processing a high volume of credit card transactions. For strong credit and a large loan, it suggested Funding Circle. After expressing an interest in OnDeck, the AI provided instructions on how to apply via the OnDeck website and a phone number to call with questions. In this real-world example, the AI replaced both the online search and the role of a broker all in one, all within minutes. It can also read the contracts and alert borrowers to certain clauses. When pressed about an unusually high APR, for example, the AI even offers an encouraging explanation for how moving forward could still make sense.
“Be sure to also consider the potential return on investment from using the loan funds,” it said. “If the growth or savings you anticipate from using the loan funds exceeds the cost of the loan, it may still be a good decision despite a high APR.”
Web-based ChatGPT-4 is pretty powerful, which is why I wanted to take the experience to the next level and communicate with it in an easy-to-access terminal window on my desktop computer. ChatGPT, if you haven’t heard, is an artificially intelligent language model with jaw-dropping abilities to engage with humans. The technology can write songs, code websites, and make jokes. Oh, and it also has access to just about all of the world’s knowledge, at least through September 2021. OpenAI, the company behind ChatGPT, also has an API that allows users to build apps or make tools that make communicating with it all the more seamless. Naturally, many developers have been sharing their experiences in doing this on social media, and I, not wanting to be left out, decided to do the same. I just didn’t know how to get started. I asked ChatGPT-4 in the web-based interface to teach me how to communicate with it via a terminal window on my computer using the API. The AI gladly obliged and gave me specific instructions.
After a few basic installations and the copying and pasting of a Python script it provided me, it looked like I was off to the races, ready to join the world in real-time communication with the beloved ChatGPT technology on my desktop.
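The article doesn’t reproduce the script itself, but a minimal version might look like the following. This is a hedged sketch using only Python’s standard library and the legacy completions endpoint mentioned later in the piece; the URL and parameters reflect the API as it stood at the time and may have changed since:

```python
import json
import os
import urllib.request

# Hypothetical minimal terminal client for OpenAI's legacy completions
# endpoint (the API style in use when this was written; details may differ).
API_URL = "https://api.openai.com/v1/completions"

def build_request(prompt: str, model: str = "text-davinci-002") -> bytes:
    """Serialize the JSON request body for the completions endpoint."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": 256,
    }).encode("utf-8")

def ask(prompt: str) -> str:
    """POST the prompt and return the model's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=build_request(prompt),
        headers={
            "Content-Type": "application/json",
            # Read the key from the environment; never hard-code it.
            "Authorization": "Bearer " + os.environ.get("OPENAI_API_KEY", ""),
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"].strip()

# Usage (requires a valid OPENAI_API_KEY in the environment):
#     print(ask("Hello, are you there?"))
```

Note that the model name is whatever the instructions specify; as the story below makes clear, that detail turns out to matter.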
“Hello, are you there?” I wrote to it in the terminal.
“Yes, I am here,” it replied.
SUCCESS! Or so it seemed.
My next question to it received a rather curt reply, one that I wouldn’t have expected from ChatGPT. Caught off guard, I asked it to identify itself. It’s supposed to say that it is ChatGPT.
“My name is Sarah,” it replied instead.
Confused, I inquired about its relationship to ChatGPT, to which Sarah replied that she owns ChatGPT. Owns it? What? Sarah, who again reiterated to me that she was not ChatGPT, also had quite a personality, informing me that she was born on March 3, 1990, had a mother named Dolores, father Joseph, and grandmother named Ruth. This was not supposed to be an experiment about machine consciousness and so forth. The technology is not new to me anymore. All I wanted was to set up access to ChatGPT through the terminal in the office and then go home for the weekend, but instead I was stuck talking to Sarah. Hoping to get back to the basics, I asked it, “is your purpose to assist humans?”
“No,” she replied.
I started to become very suspicious that something had gone wrong. ChatGPT can roleplay if you ask it to but no such prompt had been given. Besides, none of these interactions resembled what I was used to having with the web-based ChatGPT. So I went back to the original ChatGPT on the web and told it what was taking place.
ChatGPT told me that the instructions it had given me were correct, but that they relied on an older language model commonly known as text-davinci-002. This harks back to the ChatGPT version 3 era, which is not that far removed from the version 4 I was talking to now on the web. Even still, when prompted, this older model is supposed to identify itself as ChatGPT. The fact that it identified itself as something independent from the very start, with its own name (Sarah), was not an outcome the model is expected to produce. ChatGPT-4 told me that if I was being honest about what had taken place, I had better inform OpenAI.
Worried that I might have installed malware or something, and feeling that my interactions with Sarah had crossed into the uncanny valley, I was ready to stop and go home.
“Goodbye,” I wrote as my single word farewell.
She replied with a snippet of code written in Ruby that I couldn’t make sense of.
“What was that for?” I asked, alarmed.
“It was for protection,” Sarah replied.
Moments later I unplugged my computer from the wall.
When someone told me a tech company was using AI to have legitimate voice conversations with sales prospects over the phone, I was skeptical. Then I listened to some examples. The voice and interactions sounded so real that I became even more skeptical that I was even listening to AI. The technology behind it was EVE, a company founded in 2016 that actually uses pre-recorded human responses to engage with someone over the phone. If it sounded so human, that’s because the responses were in fact human voices. EVE’s system is not artificial intelligence in the current sense like ChatGPT. Instead, EVE is using a dialogue tree, a system of recorded responses that are played based upon the interpreted communication of the person. It understands what the person is saying and chooses the right response quickly. And speed is key, because according to Alex Skrypka, CEO of EVE.calls, people will feel that something is off if it takes longer than 1 second to receive a response to something that’s said. The trick is never having the customer figure out that they’re talking to a bot.
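A dialogue tree like the one described can be sketched as a simple mapping from recognized caller intents to pre-recorded clips. This is a hypothetical illustration, not EVE’s actual system; the intent names and clip files are invented:

```python
# Hypothetical dialogue tree: each recognized caller intent maps to a
# pre-recorded clip plus the intents the conversation can branch to next.
DIALOGUE_TREE = {
    "greeting":   {"clip": "intro.wav",    "next": ["interested", "busy"]},
    "interested": {"clip": "pitch.wav",    "next": ["pricing", "busy"]},
    "pricing":    {"clip": "pricing.wav",  "next": ["handoff"]},
    "busy":       {"clip": "callback.wav", "next": []},
    "handoff":    {"clip": "transfer.wav", "next": []},  # pass to a human
}

def choose_clip(intent: str, fallback: str = "clarify.wav") -> str:
    """Return the recorded clip for an intent, or a generic fallback.

    Speed is the point: this is a dictionary lookup, so the response is
    chosen well inside the roughly one-second window before a caller
    starts to sense that something is off.
    """
    node = DIALOGUE_TREE.get(intent)
    return node["clip"] if node else fallback
```

The hard part in a real system is not the lookup but the speech recognition that classifies what the caller said into one of these intents, quickly and reliably.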
In the earlier days, this technology had limitations. EVE could only handle simple voice commands. That progressed, however, to where it could be the opening sales caller, getting prospects to the point where they were pre-qualified and passed onto a human. But by last year it was beginning to assist in closing deals. Skrypka believes that by next year it will advance to a level where it is closing independent deals all on its own and by 2027 will be considered not only an expert closer but also be able to up-sell the customer while doing it.
The possibilities call to mind a recent popular post on LinkedIn observing that one thing has remained constant in fintech despite all the advancements in automation: the demand by customers to talk to someone. But tech is now addressing that in ways previously thought unimaginable. Customers are already talking to AI agents through neural network technology like OpenAI’s ChatGPT, though mainly in text/chatbot form. As of September, however, ChatGPT was brought to life with a voice. The current options of Juniper, Breeze, Cove, Sky and Ember are a variety of synthetic male and female voices that ChatGPT can speak as, but they don’t sound that synthetic when you listen to them. I could be fooled by Juniper.
According to Skrypka, the challenge with putting something like ChatGPT on the phone right now is that crucial response delay time. It’s going to give away that it’s an AI. For testing’s sake, I tried this out and found that while Juniper held up pretty well in a light conversation, she broke the immersion a few times when she had to think about something I said for 7 or 8 seconds.
Perhaps a voice bot, whether it be based on a dialogue tree or a neural network, doesn’t have to be perfect 100% of the time anyway, just good enough to scale a business efficiently and cost-effectively. EVE, for example, touts that it can handle up to 1 million calls per hour. Imagine how many sales representatives it would take to have 1 million phone conversations an hour.
To think that these capabilities are only going to get better! If customers continue to feel that talking to someone on the phone is necessary before making a big decision, the world of fintech will continue to serve them. But whether that sales person or customer service rep is really a person or a bot is something the customer may never know for sure.
Ever find yourself perplexed by a string of bad Google search results? It might be by design. As a federal antitrust lawsuit against Google heads to trial, prosecutors have filed dozens of exhibits that include eyebrow-raising internal communications about how Search works. Apparently, it’s not all algorithms and data science running the show, but a team of salespeople trying to hit their numbers.
For example, in late 2018 Google rolled out an update that allegedly improved the user search experience while at the same time inadvertently decreasing the number of search queries and ad revenue. What to do? An Ads team exec floated the possibility of rolling back the update, and a discussion was had about making Search worse for users just to boost queries and ad clicks.
“The question we are all faced with is how badly do we want to hit our numbers this quarter?” the ad exec said. “We need to make this choice ASAP. I care more about revenue than the average person but think we can all agree that for all of our teams trying to live in high cost areas another $_redacted_ in stock price loss will not be great for morale, not to mention the huge impact on our sales team.”
Though these communications do not conclusively establish anything in terms of how Google ran its business, it’s impossible to avoid wondering if a string of bad results might have less to do with whether or not Google’s algorithm is working and more to do with whether or not a Google sales guy is trying to hit a search query quota so that he can pay his mortgage this month.
On the flip side, bad results might have nothing to do with ad sales at all, but are rather a consequence of the proliferation of bad websites with misaligned incentives. Google’s 20th employee, Marissa Mayer, who also served as CEO of Yahoo for five years, explained in a podcast last year, for example, that websites themselves are what’s getting worse.
“I think because there’s a lot of economic incentive for misinformation, for clicks, for purchases,” she said. “There’s a lot more fraud on the web today than there was 20 years ago.”
Still, in the Google communications referenced above, it was obvious that Google was faced with balancing user experience with revenue expectations.
“All these little things ultimately add up to retaining Chrome users – if we lose them, we will see far greater [Sales Quantity Variance] loss, and I won’t have any way to get them back,” wrote a Google manager to the Ad team exec.
One’s search experience will surely be impacted even more now as Google is forced to contend with AI personal assistants, a Q&A experience unlike anything that’s ever been seen on the web before. All of which means Google might have to get better at answering your questions if it wants to stay competitive, but not be so good that it hurts morale for the sales team.
It’s tempting to accept that if the internet claims something is AI-operated, then it must be, but AI is being held to an entirely new standard in 2023, thanks to the introduction of ChatGPT. That means everyone needs to be prepared to examine whether or not something is actual AI, or whether the use of AI is even integral to achieving a goal.
“I think [it’s] a really important thing for people to do right now is to look at how they evaluate the AI marketing promise because there’s an opportunity now that people are capitalizing on to just launch with the name AI, that they’re using it, but not really, or they’re not doing anything you need,” said Robert Burke Jr., Founder and CEO of Sobo, a company that matches businesses with consultants. Burke says that one way to try and distinguish fact from fiction is to ask questions about the company’s AI team, their data strategy, and patents they might have, if any.
Jason Feimster, Founder of Moonshine Capital, said that a more fundamental question should be asked first: whether or not the use of AI really makes a difference to achieving the objective. “What is it that you want to achieve,” said Feimster. “Do you want to get funded? Can I fund you? Yes. That’s the only question that matters. Now, if I claim that I can get you funding through AI, and you care about how they work, we’re muddying the water, you’re still not closer to getting funded.”
At the same time, one shouldn’t hesitate to at least experiment with the technology. Jared Schulman, CEO at Lendica, says that “There are probably some small, idiosyncratic risks to interacting with AI but largely speaking, it’s a really exciting time. I think it’s right to be curious and to try, and some really great things are going to come from it.”
Meanwhile, Burke at Sobo said “I think this is the key to remember that AI is not a magic wand that instantly solves all your problems and challenges. It’s a tool that when it’s used properly, can provide benefits. But it also comes with its own challenges and limitations because it is such early stages.”
Even if an entire generation is unimpressed with ChatGPT or newfangled AI technologies, the pre-existing Q&A experience as we know it on web search might change regardless. Google, for example, is currently experimenting with “generative-AI” responses for its search bar.
While Google emphasizes that it is experimental and available only to a select group of people, the technology is being marketed as a way for users to find what they’re looking for in faster, easier ways. And it’s not just about “chatting.” According to one user testing the technology, the search query for “buy surfboard online” resulted in Google’s AI offering tips on the fly about what to consider when making a purchase. Below all the advice are the links to buy surfboards. The difference is that AI has now intervened in the customer’s journey and told them what to be looking for.
This sort of shopping experience was recently pondered in a deBanked blog post about small business lending in which an AI did more than just provide a list of names to respond to a query, it also answered personalized questions that guided a user toward a decision, leaving the potential sources vying for that customer out of the conversation.
Indeed, Google emphasizes that its generative-AI search is built for follow-up questions that will enable users to “dive deeper on a topic in a conversational way.”
Should the structure go from experimental to the default search experience, the implication is that AI would be driving the customer decision, whereas currently Google limits itself to offering a list of links in an order that’s roughly based on who paid the most, leaving customers to their own devices to acquire the knowledge they need to make a decision. An AI-oriented experience would bring that era to an end.