Exploring the Technology Behind Meta’s AI Chatbot
Meta, led by Mark Zuckerberg, has introduced its Llama 2 chatbot through its AI division. Meta has chosen Microsoft as its preferred partner for Llama 2, making the model accessible through the Windows operating system.
Meta’s approach to Llama 2 differs from that of OpenAI, which created the AI chatbot ChatGPT. This is because Meta has made its product open source – meaning that the original code is freely available, allowing it to be studied and modified.
This strategy has sparked a wide wave of debate. Will it promote greater public oversight and regulation of large language models (LLMs) – the technology behind AI chatbots such as Llama 2 and ChatGPT? Could it inadvertently allow criminals to use the technology for phishing attacks or malware development? And could the change help Meta gain an edge over OpenAI and Google in this fast-moving industry?
Whatever happens, this strategic move looks set to reshape the current landscape of generative AI. In February 2023, Meta released its first LLM version, called Llama, but made it available for academic use only. Its updated version, Llama 2, includes improved performance and is more suitable for business use.
Like other AI chatbots, Llama 2 had to be trained on online data. Exposure to this vast pool of information improves its ability to provide users with useful answers to their questions.
The original version of Llama 2 was refined with “supervised fine-tuning,” a technique that tailors the model for public use by training it on high-quality question-and-answer data. It was further improved with reinforcement learning from human feedback (RLHF), which, as the name suggests, incorporates human evaluations of the AI’s responses to align it with human preferences.
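To make the idea of supervised fine-tuning concrete, here is a deliberately tiny sketch: a toy bigram next-token model trained by gradient descent on a handful of question-and-answer pairs. The dataset, model, and training loop are all hypothetical illustrations of the general technique, not Meta’s actual method, which uses transformer networks trained at vastly larger scale.

```python
# Toy illustration of supervised fine-tuning: train a bigram next-token
# predictor on labeled question-answer data via cross-entropy loss.
# All data and model details here are made up for illustration only.
import numpy as np

qa_pairs = [
    ("what is llama", "a language model"),
    ("who made llama", "meta ai"),
]

# Build a token vocabulary from the Q&A pairs.
tokens = sorted({t for q, a in qa_pairs for t in (q + " " + a).split()})
idx = {t: i for i, t in enumerate(tokens)}
V = len(tokens)

# Training examples: (previous token, next token) pairs, with the question
# prepended so the answer is conditioned on it.
examples = []
for q, a in qa_pairs:
    seq = (q + " " + a).split()
    for prev, nxt in zip(seq, seq[1:]):
        examples.append((idx[prev], idx[nxt]))

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))  # logits for next token = W[prev]

def loss_and_grad(W):
    """Average cross-entropy loss and its gradient over all examples."""
    total, grad = 0.0, np.zeros_like(W)
    for p, n in examples:
        logits = W[p]
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        total -= np.log(probs[n])
        g = probs.copy()
        g[n] -= 1.0          # softmax cross-entropy gradient
        grad[p] += g
    return total / len(examples), grad / len(examples)

initial_loss, _ = loss_and_grad(W)
for _ in range(200):
    loss, grad = loss_and_grad(W)
    W -= 1.0 * grad          # plain gradient descent

final_loss, _ = loss_and_grad(W)
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

The design point the sketch captures is that fine-tuning is ordinary supervised learning: the model’s parameters are nudged so that, given the question tokens, the answer tokens become more probable. RLHF then adds a second stage in which human preference judgments, rather than fixed answer text, supply the training signal.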
Proven benefits
Meta’s adoption of an open source ethos with Llama 2 leverages an approach that has worked for the company in the past. Meta’s engineers are known for building tools that help developers, such as React and PyTorch. Both are open source and have become industry standards. Through them, Meta has set a precedent for innovation through collaboration.
The release of Llama 2 holds the promise of safer generative AI. Through shared knowledge and collective scrutiny, users can identify misinformation and vulnerabilities that criminals could exploit. Unexpected applications have already emerged, such as a user-created version of Llama 2 that can be installed on the iPhone, highlighting the community’s creativity.
But there are limits on how far Meta will let users monetize its AI system. If a product built on Llama 2 exceeds 700 million monthly active users in the preceding calendar month, its maker must request a license from Meta. For Meta, this opens up the possibility of sharing in the profits of successful Llama 2-based products.
Meta’s strategy differs sharply from the guarded approach of its primary competitor, OpenAI. While some question Meta’s ability to compete in this space and to commercialize products as OpenAI has done with ChatGPT, Meta’s decision to invite global developers into the fold suggests a larger vision. It’s a move that positions Zuckerberg’s company not only as a player, but also as an intermediary that taps global talent to fuel Llama 2’s growing ecosystem.
This strategy could also be an ingenious hedge against potential competition from other tech giants like Google. As a large number of users explore the possibilities of Llama 2, any successful advances can be quickly integrated into Meta’s other products. Only time will tell the full impact of this decision, but the immediate effects on the industry are already widespread.
Benefits and pitfalls for users
Public testing of the open source technology allows for wider scrutiny and provides the user community with an opportunity to assess Llama 2’s strengths and weaknesses, including its vulnerability to attack. The observant eye of the public can reveal flaws in LLMs, prompting the development of defenses against them.
On the downside, concerns have been raised that this is akin to “handing the knife to criminals” as it could also allow malicious users to exploit the technology. For example, its power could help scammers build a dialogue system that creates believable automated conversations for phone scams. This potential for abuse has led some to call for regulation of the technology.
But exactly what rules are drawn up, who gets the power to oversee the process, and which applications need more or less oversight all require careful thought to ensure that regulation doesn’t simply entrench monopolies for big tech companies.
In the unfolding saga of AI development, the open source debate serves as a reminder that technological advances are rarely simple or one-dimensional. The effects of Meta’s decision will probably spread in the technology world in the coming years. While Llama 2 may not yet rival ChatGPT’s features, it opens the door to the development of numerous innovative products.
Google is also under scrutiny as speculation mounts about how it might respond. In an era where open source culture is flourishing, it wouldn’t be surprising to see Google follow suit with its own releases.
The term “technology for good” has become a common mantra for technology companies that use some of their resources to make a positive impact on all of our lives. At the end of the day, however, this goal remains a shared responsibility, not something left to a handful of companies.
It is also a goal that requires cooperation and joint efforts between universities, industry and other parties. As LLM technologies continue to evolve, the stakes are high, and the path forward is fraught with both opportunities and challenges.