How we got here–and how to realistically think about AI when building products.
Everybody is talking about Artificial Intelligence these days. In fact, I can’t remember the last time I attended a cocktail party or read anything on the Internet without hearing about ChatGPT and how it will save (or destroy) the planet.
A significant contributor to this newfound popular interest is the advent of the aforementioned ChatGPT (short for Chat Generative Pre-trained Transformer). If you haven’t heard about it (I know, unlikely), it is a chatbot that was launched by OpenAI in November 2022 to great public acclaim (and some consternation). But is it a good thing or a bad thing? Depends on your point of view and how you use it. If you’re a 6th grader who doesn’t mind bending the rules, it’ll write a pretty good term paper for you on Abraham Lincoln, the War of 1812, or Star Trek.
On the other hand, it sometimes makes things up, for example:
- I just asked it: “Talk about the technical director of LEGO Universe.”
- ChatGPT: “The Technical Director of LEGO Universe was Mark Hansen. As the Technical Director, Hansen played a crucial role in the development and implementation of the massively multiplayer online game (MMOG) LEGO Universe, which was released in 2010.”
That’s kind of cool, but I was the Technical Director, not Mark.
It was also trained on data that is now a couple of years old, so there are a lot of things it just doesn’t know about. For example:
- My question: “Who is in charge of Twitter?”
- ChatGPT: “As of my last knowledge update in September 2021, Jack Dorsey was the CEO of Twitter. However, please note that executive positions can change over time, and it’s recommended to verify the current leadership by referring to the latest news or official Twitter announcements.”
As someone who studied artificial intelligence before most people had even heard of the term, it’s interesting to hear the swirling conversation about it in 2023. In this blog, I’ll give you a brief history of AI and the events that led us to where we are today–and how to use that information before proceeding with AI in your product development process.
A brief history of AI & how we got here
I received my Master’s degree in AI from Yale back in 1985, a year that fell between two notable and well-documented “AI Winters,” a term for a period in the history of AI research when interest in the field declines sharply. The term was coined in 1984 at the annual meeting of the American Association for Artificial Intelligence (AAAI) to describe a process that begins with pessimism in the AI community, spreads to the press, and is followed by significant reductions in funding and, finally, the end of serious research. Sure enough, three years later the billion-dollar AI industry began to collapse (again).
There were, essentially, two different AI Winters: the first from roughly 1974 to 1980, and the second from the late 1980s to the mid-1990s.
What ended the first AI Winter? A number of factors, including new theoretical and algorithmic developments, increased computing power, industrial and commercial applications, and renewed government support and funding. And what ended the second? In short: steady, ongoing, real-world successes in a large number of fields, such as speech recognition, machine vision, face recognition, machine learning, logistics planning, and decision-support systems.
Since the mid-1990s, AI has once again started receiving massive attention due to its potential to revolutionize various industries, from healthcare to finance to retail. Businesses in these sectors are looking to incorporate AI into their operations to improve efficiency, reduce costs, and gain a competitive edge.
However, before jumping on the AI bandwagon, it’s essential to understand how we got here and how to think realistically about AI when building products. It’s also important to remember that systems that seem smart might not be as smart as we think: that very misperception fueled the hype that preceded both previous AI Winters.
AI is not new, but it is shiny. How long it stays shiny (this time) depends on both its actual performance in industry and education and on its hype caused by misperceptions and misleading reporting.
So, what events and developments in AI brought us here? What works? What doesn’t? And what does the history of AI teach us as we build new tools and products?
While AI has come a long way over the past 70 years, it’s essential to recognize that it’s not a magic solution to all problems. It is not a panacea that can be implemented without careful consideration of its limitations and potential risks.
Our own hype, over-reliance, and overestimation of its current apparent success could cause a new AI Winter (or at least a cooling of enthusiasm and public support). ChatGPT is a good example of how this might occur: there are now claims that it can diagnose diseases, pass the LSAT and the MCAT, and so on. What will happen when it fails to live up to that super-human billing? This is something we should keep in mind with this technology and its relatives. As the Bard pointed out, all that glitters is not gold.
A critical aspect of realistically thinking about AI is understanding its limitations. Machine Learning systems are only as good as the data they are trained on. If the data is biased, the AI will make biased decisions. Therefore, it is essential to ensure that the data used to train the system is diverse and representative of the population it will be applied to.
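As a concrete first step, you can measure how groups are represented in your training data before any model is trained at all. The loan-approval dataset and its group labels below are entirely hypothetical; this is a minimal sketch in plain Python, not a substitute for a proper fairness audit:

```python
from collections import Counter

def representation_report(records, group_key):
    """Count how often each group appears in a labeled dataset,
    returning each group's share of the total."""
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical training records for a loan-approval model.
training_data = [
    {"age_band": "18-30", "approved": True},
    {"age_band": "18-30", "approved": False},
    {"age_band": "31-50", "approved": True},
    {"age_band": "31-50", "approved": True},
    {"age_band": "31-50", "approved": True},
    {"age_band": "51+",   "approved": False},
]

shares = representation_report(training_data, "age_band")
# One age band makes up half the sample, so a model trained on this
# data has seen far fewer examples of the other groups.
```

A skewed report like this one is a prompt to collect more data or reweight, not proof of fairness when the shares happen to be equal.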
Additionally, it’s essential to understand the potential risks of implementing AI. AI can be used to automate processes, which can lead to job losses, and it even has the potential to manipulate people or make decisions that have a significant impact on people’s lives, like in healthcare or criminal justice. Therefore, it’s crucial to consider the ethical implications of using AI and have measures in place to ensure that the AI is being used responsibly in your business.
How to think about AI for product development
When building products with AI, it is crucial to understand what AI can and cannot do. AI is excellent at processing and analyzing large amounts of data, identifying patterns, and making predictions based on that data. However, it cannot replace human intuition and decision-making entirely. It is essential to have a clear understanding of the problem you are trying to solve and how AI can help you solve it.
So, what should you consider when bringing AI into your product development processes? Here are some questions you might ask yourself:
First, ponder the ethics of what you’re doing. Will the system you hope to build reduce public security or safety? Will it put people out of work unnecessarily? Also, consider whether there is enough (and appropriate) data for the machine to learn what you want it to learn, and think about what biases it might be acquiring. Will the system infringe on personal privacy, liberty, or happiness? Will any groups of people be disenfranchised as a result? Are you stealing anyone’s intellectual property?
And just how does one use AI in product development?
This is a very large topic, but to break it down a bit, here are a few important options (out of many):
- Machine Learning: This is a branch of AI that involves training models on data to make predictions or perform specific tasks. It can be used to develop software with capabilities such as image recognition, natural language processing, recommendation systems, and more. Developers can build and train models using popular ML frameworks like TensorFlow or PyTorch.
- Natural Language Processing: This technology focuses on enabling computers to understand and process human language. It can be used to develop software applications like chatbots, language translators, sentiment analysis tools, or even automated content generation systems.
- Computer Vision: This involves training models to understand and interpret visual data, such as images and videos. It can be used to build software for tasks like object recognition, facial recognition, autonomous vehicles, or even augmented reality applications.
- Intelligent Automation: AI can be used to automate repetitive or mundane tasks, enhancing productivity and efficiency. This can involve building software applications that leverage techniques like robotic process automation (RPA) or using machine learning algorithms to automate data analysis and decision-making processes.
- AI-based Code Generation: Tools, including ChatGPT, are now able to take requirements (and other specifications) and generate working code.
- Predictive Analytics: AI can analyze large datasets and identify patterns, enabling predictions and forecasting. This can be useful for software applications in various domains, such as finance, healthcare, sales forecasting, and supply chain management.
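To make the predictive-analytics option above concrete, here is a minimal sketch that fits a least-squares trend line in plain Python and uses it to forecast the next period. The monthly sales figures are invented, and a production system would use a real forecasting library rather than this toy:

```python
def fit_trend(values):
    """Ordinary least-squares fit of y = slope * x + intercept,
    where x is the index 0..n-1 of each observation."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

def forecast_next(values):
    """Predict the value at the next index along the fitted trend."""
    slope, intercept = fit_trend(values)
    return slope * len(values) + intercept

# Hypothetical monthly sales, trending upward.
sales = [100, 110, 125, 130, 145]
prediction = forecast_next(sales)  # → 155.0
```

Even a sketch this small illustrates the core pattern behind the fancier options in the list: learn parameters from historical data, then apply them to unseen inputs.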
To leverage AI in product development, developers need to have a good understanding of AI concepts, programming languages, and frameworks relevant to the specific application. They should also have access to quality data for training AI models and a robust infrastructure to deploy and scale the software. Collaboration with data scientists and domain experts can further enhance the effectiveness of AI-driven software development.
AI is a powerful tool that can help businesses improve their operations and gain a competitive edge. However, it is essential to understand that AI is not a magic solution to all problems. Realistically thinking about AI means understanding its limitations, potential risks, and ethical implications. When it comes to product development, it also means having a clear understanding of the problem you are trying to solve and how AI can help you solve it. By doing so, businesses can successfully incorporate AI into their product development operations and avoid potential pitfalls.
Today, AI is once again generating excitement and optimism about its potential to revolutionize many industries. However, the history of the AI Winter serves as a reminder of the need for realistic expectations and continued investment in AI research and development to ensure that the technology continues to progress and deliver on its promise.