Microsoft Orca AI Model Learns and Imitates GPT-4 Capabilities
Microsoft, following in the footsteps of other tech giants such as Google, is making significant investments in artificial intelligence (AI). The company's CEO, Satya Nadella, is leading the charge, as evidenced by Microsoft's multiyear, multibillion-dollar investment in OpenAI, the creator of ChatGPT. While Large Language Models (LLMs) like ChatGPT and Google Bard have impressive capabilities, their size demands significant computing resources, which can be limiting. To address this issue, Microsoft has developed Orca, a 13-billion-parameter model that emulates the reasoning process of Large Foundation Models (LFMs).
Meet the Orca
Unlike ChatGPT, Microsoft Orca is a smaller AI model, developed and tailored for specific use cases. According to a Microsoft research paper, Orca learns from rich signals generated by GPT-4, with its reported roughly one trillion parameters, including explanation traces, complex instructions, and detailed step-by-step thought processes, while overcoming the challenges of large-scale information processing and task diversity. Thanks to its small size, Orca does not require large, dedicated computing resources. As a result, it can be optimized and tailored to specific applications without the need for a large-scale data center.
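The training approach described above pairs instructions with the teacher model's step-by-step explanations rather than short answers alone. A minimal sketch of assembling such an "explanation tuning" record is shown below; the field names and prompt layout are illustrative assumptions, not details from the paper:

```python
# Sketch of building one training record in the spirit of Orca's
# explanation tuning: a system instruction asks the teacher model to
# reason step by step, and the teacher's detailed response becomes the
# target completion. Field names and format are assumptions.

def build_record(system_instruction: str, user_query: str, teacher_response: str) -> dict:
    """Assemble one (prompt, completion) training example."""
    prompt = (
        f"### System:\n{system_instruction}\n"
        f"### User:\n{user_query}\n"
        f"### Assistant:\n"
    )
    return {"prompt": prompt, "completion": teacher_response}

# Example: a system instruction that elicits an explanation trace
# from the teacher, captured alongside the query for fine-tuning.
record = build_record(
    "You are a helpful assistant. Think step by step and justify your answer.",
    "If a train travels 60 km in 45 minutes, what is its speed in km/h?",
    "45 minutes is 0.75 hours, so the speed is 60 / 0.75 = 80 km/h.",
)
```

The key idea is that the completion carries the teacher's reasoning trace, not just the final answer, which is what lets a smaller student model imitate the larger model's thought process.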
One of the most significant aspects of this AI model is its open-source nature. Unlike the proprietary ChatGPT and Google Bard, Orca supports an open-source framework, which means the public can contribute to the development and improvement of a small LFM. By harnessing the power of the public, it can take on the private models built by big tech companies.
Although built on the foundation of Vicuna, another instruction-tuned model, Orca exceeds its capabilities, outperforming it by 100% on complex zero-shot reasoning benchmarks such as Big-Bench Hard (BBH) and by 42% on AGIEval.
ChatGPT competitor
According to the research paper, Orca not only outperforms other instruction-tuned models but also performs at the level of OpenAI's ChatGPT on the BBH benchmark, despite its smaller size. It also demonstrates academic proficiency on competitive exams such as the LSAT, GRE, and GMAT in zero-shot settings without chain-of-thought (CoT) prompting, although it lags behind GPT-4.
Microsoft's research team claims that Orca can learn through step-by-step explanations from both human experts and other large language models (LLMs), an approach intended to improve its capabilities and skills.