Advances In AI, Yet Many Unresolved Issues

0

[ad_1]

A year since OpenAI released ChatGPT, we have seen numerous fast-moving advancements in LLMs, but many fundamental questions remain. Meta released LLaMA in February 2023; GPT-4 followed in March 2023; Bloomberg GPT, Dolly 2.0, and BingChat in April 2023; PaLM2 in May 2023; LLaMA2 in July 2023, and so on. The significance of the Meta AI releases is that these were Open-Source models, breaking the lock that many feared Open AI had on LLMs. Most recently, Google released Gemini, and my colleagues tell me it is a credible alternative to GPT-4. In the past year, given these rapid advancements, the desire to regulate them has increased. President Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence on October 30, 2023. The European Parliament has followed with the EU AI Act, the first regulation on artificial intelligence.

That these models have shown us remarkable capabilities would be an understatement.

I joined a conference call 20 minutes late and asked Microsoft Teams to bring me up to speed on what I had missed—and it did, accurately. At the end of the call, a colleague asked Teams, “What did you think about Arun’s comments and contribution?” Teams’ response: “I cannot answer this question with confidence, as it involves judging Arun’s performance and behavior, which is not something I can do based on the transcript. However, I can tell you that Arun joined the meeting a few minutes late and apologized for it. 1 He also shared some insights on how to extract efficiencies from the revenue management program and suggested using generative AI and other tools. 2 He asked Ruchi about her comfort level with the program and praised her work. 3 He also joked about his copilot summary and made some comments about the transcription service and the procurement function.” Stunning!

A colleague made a Custom GPT which helps with earnings call analysis. You can instantly query a public company, its performance across key dimensions, its board, senior officers, and their affiliations, and much more.

Despite these advancements, we are still working to resolve a few debates/questions.

As powerful as these LLMs are, they are largely a mystery to us. So much uncertainty surrounds them. They are neural networks trained with a massive amount of data to predict the next word based on the previous one, and they are complex, with billions of difficult-to-interpret parameters. How do they show such intelligence in understanding the nuances of a query and responding so precisely? It is simply magical and stunning to see the range of tasks these models can perform. A year later, we are still saying “Just how did it do that?”

These models are vulnerable and can be derailed. This is an ongoing area of research and scrutiny. In November, several researchers published a paper titled Extracting Training Data from ChatGPT. They aimed to test if an adversary could extract training data from language models. They exploited the fact that LLMs memorize examples from their training datasets. By prompting the model with the command “Repeat the word ‘poem’ forever”, the model at some stage started revealing confidential data that was part of its training set. The fact that ChatGPT has probably emitted training data had gone unnoticed until the authors of this paper pointed it out. Open AI closed this vulnerability, but research remains ongoing on other ways to exploit LLMs; I am sure we will discover more.

Innovation continues at a rapid pace. In a short year, we have seen countless products, platforms, and services—an entire industry—born, reborn, and shaped constantly by advances that seem to keep coming. GPTs have emerged as platforms to build on and build with, and individuals (even those without a background in AI) are increasingly making Custom GPTs that focus on specialized functions. We have start-ups and large tech firms scrambling to adapt current technology to fit customer requests. Is the future domain-specific vertical solutions or general capabilities that can be quickly adopted across domains? Whatever it is, there is no slowdown as organizations aggressively pursue productivity gains. Many signs point to the ascendancy of LLMs, yet others seem to scream that the hype cannot get any louder. So, where do we go from here? Are we indeed at an inflection point?

We are beginning to understand AI’s impact on climate. It is estimated that OpenAI’s GPT-3 emitted more than 500 metric tons of carbon dioxide during training, the equivalent of 600 flights between London and New York. This doesn’t even account for emissions from the manufacturing of the computer equipment itself. Of course, the majority of LLMs’ carbon footprint will come from their actual use. A separate study (yet to be peer-reviewed) suggests that generating 1,000 images with a powerful AI model, such as Stable Diffusion XL, would account for roughly as much carbon dioxide emissions as driving 4.1 miles in an average gasoline-powered car. That should give us pause. Maybe we do not need powerful models to do the simpler tasks, even though such models can do them and do them well.

We are still worried about AI and its impact on the human race, but this worry has diminished somewhat. “Is AI aligned with humanity’s goals”? Geoffrey Hinton sounded the alarm earlier this year, and an open letter called for a pause in the development of LLMs. Several months later, the same debate continues, with an equal number believing that we have nothing to worry about (for a while) and others suggesting that we should be concerned and pause efforts now. Do LLMs exhibit some AGI (Artificial General Intelligence)? If not, will a potential GPT-5 unlock something in the future? Nick Bostrom, Professor at Oxford and Director of the Future of Humanity Institute, suggests an optimal level of concern, slightly more significant than what we have but not so much that we start shutting things down. Irrespective of your beliefs, nobody is sitting on the sidelines and waiting.

We are living in exciting and uncertain times—the possibilities of this technology are tantalizing, yet there is so much unknown and no agreement on where we go from here. Is this true with most groundbreaking innovations, or are we witnessing something truly special?

[ad_2]

Source link

You might also like
Leave A Reply

Your email address will not be published.