ChatGPT and where it's going
My (and many others') latest product obsession and why I think it's here to stay
Like so many, I have become enamored with ChatGPT in recent months. The product is unique in its go-to-market approach: by launching directly to consumers, OpenAI captured the imagination of the masses and ignited viral growth, adding over 1M users within just 5 days.
Given the progress AI models have made in the space, it's hard not to get excited and imagine what use cases are to come. Looking forward, here are my predictions on some game-changing product feature improvements for ChatGPT and similar large language models.
First, product improvements.
Citing sources - one of the trickiest parts of the current model is that it's extremely hard to tell whether the output is truthful. As consumers, we have learned over the years to discern fact from fiction largely by checking sources. Surfacing citations is tricky given that ChatGPT currently relies on stale training data (2021 and earlier), but it would let users make informed decisions when acting on chatbot outputs. Google's DeepMind published GopherCite in March 2022 to tackle this challenge, so it is likely we will see this feature roll out across the market in the near future.
Internet connection - the next improvement would be the ability to connect to the internet, so the model has continually refreshed data to pull from whenever facts or recent information need to be surfaced. Questions like who the president is, what year it is, or what the weather looks like could be answered from trusted sources on the web, similar to the way Google surfaces answers directly within search results (a rough sketch of this retrieve-and-cite pattern follows after this list). This naturally comes with complications and limitations, since it can be difficult to discern what counts as fact on the internet, but citing sources can mitigate some of that risk.
Bias removal - this is the hardest and most nuanced area, but also the most important. Training tools on inherently biased data requires a purposeful approach to prevent harm. ChatGPT has already added filters against clear harms, such as refusing to answer questions that involve self-harm or direct harm to others, or that perpetuate bias. But, as users have discovered, bias still lurks a layer below. Identifying a clear, fair way to remove bias is difficult at best in practice, but necessary as these tools expand to broader use cases. Realistically, we will also see regulation play a key part in this rollout.
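To make the first two ideas a bit more concrete, here is a minimal sketch of a retrieve-and-cite flow in Python. The search_trusted_sources helper is entirely hypothetical, the OpenAI completion call is just one way to generate the answer, and none of this reflects how ChatGPT itself is built; it is simply an illustration of the pattern.

```python
import openai  # pip install openai; assumes openai.api_key is set

def search_trusted_sources(question):
    """Hypothetical retrieval step: return (snippet, url) pairs from
    a curated set of trusted sites (a search or weather API, say)."""
    return [
        ("It is 52°F and partly cloudy in New York today.", "https://weather.example.com"),
    ]

def answer_with_citations(question):
    """Ask the model to answer using only retrieved snippets, citing each one."""
    snippets = search_trusted_sources(question)
    sources = "\n".join(
        f"[{i}] {text} (source: {url})"
        for i, (text, url) in enumerate(snippets, start=1)
    )
    prompt = (
        "Answer the question using only the numbered sources below, "
        "and cite them inline like [1].\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}\nAnswer:"
    )
    response = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=150, temperature=0
    )
    return response["choices"][0]["text"].strip()

print(answer_with_citations("What's the weather in New York right now?"))
```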
In case you're curious, here's what ChatGPT itself suggested when asked what improvements it needs; its answer was very language- and conversation-focused:
One way to improve chat GPT would be to fine-tune it on a large dataset of conversational exchanges. This would allow the model to learn the style and structure of natural conversation, and potentially improve its ability to generate more fluent and coherent responses.
Another approach might be to incorporate additional context into the model, such as the topic of the conversation or the relationship between the interlocutors. This could help the model to generate more relevant and appropriate responses.
It may also be helpful to implement some form of dialogue management, such as a dialogue engine or a set of rules for generating responses, to help guide the conversation and keep it on track.
Finally, it could be useful to incorporate some form of user feedback into the model, such as a rating system or a mechanism for reporting inappropriate or off-topic responses, to help improve the quality of the generated responses over time.
Next, near-term use cases (non-exhaustive)
Search - embedding ChatGPT into search (likely Bing, as Microsoft is a lead investor) seems like a natural product extension and a quick user-experience win, assuming the risks of incorrect or potentially dangerous responses can be mitigated. As TikTok replaces search for many younger users, search engines need to act fast to maintain user and ad revenue growth (and ChatGPT and similar models could only accelerate this pattern). Creating a differentiated user experience that reduces the time from question to solution is a necessity to stay competitive in the market.
Customer support - the next clear extension is improving chatbots to better support users' needs. Customer support can be a huge expense and is known to be a strong differentiator because it drives brand loyalty. At Oscar, dedication to strong customer support was a necessity to drive user growth, but it was also a significant expense to the bottom line. Reducing the time it takes to resolve customer questions by offering a conversational bot with a memory of prior interactions, before shifting to a human agent, maintains the competitive edge while lowering costs (a rough sketch follows after this list).
Document creation - from management consulting to product management, writing documents and creating decks has long been a necessity. Given the ability to refine these models by inputting examples and defining tone, improving productivity by supporting the creation of work materials, synthesizing notes, drafting emails, and managing calendars would help reduce repetition from the workday. Users could simply input descriptions of the slides needed based on company branding guidelines, or, in theory, a summary of the product desired, and receive a draft version. They could then refine the drafts rather than creating each iteration from scratch or from generic templates (also sketched below).
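Here is the rough sketch promised above for the support flow: the running conversation (plus, hypothetically, notes from prior tickets) is kept in the prompt, and the bot hands off to a human agent when it signals it can't resolve the issue. The ESCALATE convention and the prior-ticket notes are assumptions for illustration, not a production design.

```python
import openai  # pip install openai; assumes openai.api_key is set

ESCALATE = "ESCALATE"  # hypothetical convention for handing off to a human agent

def support_reply(user_message, history, prior_tickets=""):
    """Draft a support reply given the running conversation and any
    notes from the customer's previous tickets; escalate when stuck."""
    history.append(f"Customer: {user_message}")
    prompt = (
        "You are a customer support assistant. If you cannot resolve the "
        f"issue, reply with the single word {ESCALATE}.\n\n"
        f"Notes from previous tickets:\n{prior_tickets}\n\n"
        "Conversation so far:\n" + "\n".join(history) + "\nAssistant:"
    )
    response = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=200, temperature=0.3
    )
    reply = response["choices"][0]["text"].strip()
    if ESCALATE in reply:
        return "Connecting you with a human agent now.", history
    history.append(f"Assistant: {reply}")
    return reply, history
```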
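And a minimal sketch of the document-creation idea: the brand-voice description and the example slide are assumed inputs a team would supply, and the model returns a first draft to refine rather than a finished deck.

```python
import openai  # assumes openai.api_key is set

# Assumed inputs a team would maintain: a tone guideline and a sample slide.
BRAND_VOICE = "Concise, optimistic, plain language; no jargon."
EXAMPLE_SLIDE = "Title: Q3 Roadmap\n- Ship onboarding revamp\n- Cut response times by 30%"

def draft_slide(description):
    """Ask the model for a first-draft slide outline that matches the brand voice."""
    prompt = (
        f"Brand voice: {BRAND_VOICE}\n"
        f"Example slide in that voice:\n{EXAMPLE_SLIDE}\n\n"
        f"Draft a slide outline for: {description}\n"
        "Return a title plus 3-5 bullet points."
    )
    response = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=200, temperature=0.7
    )
    return response["choices"][0]["text"].strip()

print(draft_slide("announcing the launch of our new mobile app"))
```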
These use cases just scratch the surface of what's possible as these models become more advanced. The buzz around ChatGPT and other large language models is well deserved as we enter the golden era of generative AI.
Don’t worry, we’ll dive into all of this in more detail in the coming weeks.
In the meantime, curious about learning more?
Check out my earlier post analyzing Generative AI and subscribe to learn more.