Reactor mk. I as a guarantor of an efficient and sustainable AI world

ARC
Jun 11, 2024

Table 1. Reactor mk. I excelled across three benchmarks: 93%* on MMLU, 91% on HumanEval, and 88% on BBH.

Our Mission

Beyond offering state-of-the-art computing efficiency in the AI world, ARC’s leading goals for Reactor mk. I include a dedication to the sustainability of our planet. The model addresses current issues with energy usage in AI and strives to align with the green agenda. Conventional AI models often require a large amount of processing power, resulting in high energy consumption and significant environmental impact. Reactor mk. I, on the other hand, has fewer than 100 billion parameters, enabling it to operate at peak efficiency without consuming excessive amounts of power.

This focus on sustainability is crucial as the AI sector faces increasing criticism for its environmental impact. With the rising energy requirements of large-scale AI models, the need for greener AI solutions is more pressing than ever. Reactor mk. I directly addresses these concerns by promoting ethical AI practices and reducing resource consumption.

Test Cases and Computing Performance

To showcase the state-of-the-art performance of our model, we used three widely known benchmark datasets from the LLM world: Massive Multitask Language Understanding (MMLU), HumanEval, and BIG-Bench Hard (BBH). In short, MMLU evaluates a model’s problem-solving ability and world knowledge. It covers 57 subjects, such as computer science, law, US history, and elementary mathematics. Despite the rapid advances in AI observed recently, even the top models still fall short of expert-level accuracy across all 57 tasks.
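To make this kind of evaluation concrete, here is a minimal sketch of how multiple-choice accuracy on an MMLU-style benchmark can be computed. The example data format and the `ask_model` callable are illustrative assumptions, not ARC’s actual evaluation harness:

```python
from collections import defaultdict

def mmlu_accuracy(examples, ask_model):
    """Score an MMLU-style benchmark. Each example is assumed to be a dict
    with a question, four answer choices (A-D), a gold answer letter, and a
    subject; `ask_model` is assumed to return the model's chosen letter."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for ex in examples:
        prediction = ask_model(ex["question"], ex["choices"])  # e.g. "B"
        total[ex["subject"]] += 1
        if prediction == ex["answer"]:
            correct[ex["subject"]] += 1
    per_subject = {s: correct[s] / total[s] for s in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_subject

# Illustrative usage with placeholder data and a stub model that always
# answers "A"; a real run would iterate over all 57 MMLU subjects.
examples = [
    {"subject": "us_history", "question": "…", "choices": ["…"] * 4, "answer": "A"},
    {"subject": "law", "question": "…", "choices": ["…"] * 4, "answer": "C"},
]
overall, per_subject = mmlu_accuracy(examples, lambda q, c: "A")
print(f"overall accuracy: {overall:.1%}", per_subject)
```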

Next, HumanEval was developed to measure the functional correctness of programs synthesized from docstrings. It is widely used in machine learning, especially in the context of LLMs: the model is given a programming task described by a docstring, generates code to complete it, and the generated code is then checked for functional correctness against unit tests. Finally, BIG-Bench Hard (BBH) is intended to assess the skills of LLMs in classical NLP, mathematics, and commonsense reasoning. To push the boundaries of existing language models, it focuses on 23 especially challenging tasks drawn from the broader BIG-Bench suite of over 200 tasks.
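As a rough illustration of HumanEval-style functional-correctness scoring, the sketch below defines a model-generated solution and runs it against unit tests. This is a simplification under assumed task names: real harnesses execute untrusted model output in a sandbox and aggregate results into metrics such as pass@k.

```python
def passes_tests(candidate_code: str, test_code: str, entry_point: str) -> bool:
    """Execute a model-generated solution and its unit tests in a fresh
    namespace; the solution passes only if no assertion fails.
    NOTE: real harnesses sandbox this step; never exec untrusted code directly."""
    namespace = {}
    try:
        exec(candidate_code, namespace)   # define the generated function
        exec(test_code, namespace)        # define the check(...) test function
        namespace["check"](namespace[entry_point])
        return True
    except Exception:
        return False

# Illustrative task: the model was asked to write `add` from a docstring.
candidate = "def add(a, b):\n    return a + b\n"
tests = "def check(fn):\n    assert fn(2, 3) == 5\n    assert fn(-1, 1) == 0\n"
print(f"task passed: {passes_tests(candidate, tests, 'add')}")
```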

Reactor mk. I achieved strong results on all three benchmarks. First, it scored 93%* on MMLU. Next, it achieved a 91% score on HumanEval, significantly outperforming benchmark results. Finally, the model obtained a BBH score of 88%, likewise a remarkable result.

Sustainability Contribution

Training an LLM generally has a significant energy and environmental impact, which is why a sustainability-focused approach to AI model development is of the highest importance. For example, carbon emissions and high electricity usage are the main consequences of the large computational power required by models such as GPT-3 and GPT-4. The data centers that power these AI models frequently rely on non-renewable energy sources, which adds to greenhouse gas emissions. Data center cooling systems also consume huge amounts of water and produce wastewater that can contaminate nearby water supplies.
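To put the electricity-to-emissions relationship in concrete terms, here is a back-of-the-envelope conversion from training energy to CO2 emissions. The grid emissions factor is an illustrative assumption (roughly a global-average grid mix), not a measured figure for any specific model or data center:

```python
# Back-of-the-envelope: CO2 emitted by a training run drawing from the grid.
# ~0.4 kg CO2 per kWh is an assumed, roughly global-average grid intensity;
# actual values vary widely with a data center's energy mix.
GRID_KG_CO2_PER_KWH = 0.4

def training_co2_tonnes(energy_mwh: float) -> float:
    """Convert training energy in MWh to tonnes of CO2 (1 MWh = 1,000 kWh)."""
    return energy_mwh * 1_000 * GRID_KG_CO2_PER_KWH / 1_000  # kg -> tonnes

# A run needing under 1 MWh emits well under half a tonne of CO2 under this
# assumption; a hypothetical large-scale run needing 1,000 MWh would emit
# around 400 tonnes.
for mwh in (1, 1_000):
    print(f"{mwh:>5} MWh -> ~{training_co2_tonnes(mwh):.1f} t CO2")
```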

Compared to other models, Reactor mk. I uses significantly less energy while operating. As shown above, it achieves high benchmark scores while leaving as small a footprint on the planet as possible. The following figure presents the energy overhead, in MWh, needed to train different LLMs. As the figure shows, Reactor mk. I needs less than 1 MWh for its training phase, demonstrating significant energy efficiency and differentiating it from other large-scale AI models such as GPT-3 and GPT-4.

In this way, ARC lowers the carbon impact of AI technologies and supports the larger goal of protecting the environment by encouraging the use of models that consume less energy.

Figure 1. Comparison of model training energy overhead in MWh for GPT-4, GPT-3, and ARC’s Reactor mk. I.

What do these results mean?

In comparison to popular LLMs, ARC’s Reactor mk. I has demonstrated exceptional computational efficiency across the majority of benchmarks. It also stands out from other models on sustainability: lower energy use, fewer resources consumed, and lower carbon emissions are its major advantages. In addition, its low energy consumption means it uses significantly less water and electricity than other models.

Therefore, ARC is leading the charge to transform the AI industry thanks to its innovative technology. By delivering cutting-edge AI solutions and upholding a strong commitment to environmental sustainability, ARC sets new benchmarks for the AI world.

Register for the waitlist: https://www.helloarc.ai/reactor

*Actual score: 92.9%


ARC

We believe that AI should serve and enhance human experiences, not replace them. ARC solutions are built to support and empower humans.