Based on materials from Android Authority
Android Authority spoke with John Poole, founder of Primate Labs. He shared details on why he created Geekbench in the first place, the problems with other benchmarks he used in the past, and much more. Those who speak English can watch the full interview here.
How did you come up with the idea for Geekbench and what problem did you want to solve with it?
It all started back in 2003, when I moved from a PC to a Mac with a G5 system, which was the first 64-bit desktop computer. I ran a lot of tests on it and found that it wasn't much faster. That confused me a little, so I downloaded some of the popular Mac benchmarks available at the time to see if the problem was my system.
The benchmarks showed that the G5 was faster and performed the same as all the other G5s, which seemed strange to me. So I decided to take apart one of the popular benchmarks and found that its tests were very small and synthetic. They performed very simple tasks that couldn't serve as a measure of overall performance. They focused only on how fast your processor was and didn't take anything else into account, like memory.
Then I decided to write my own tests and see what happened. It was a side project that I worked on for about three years. Then, in 2006, the first version of Geekbench was released as a free download.
At that time, we received a lot of great feedback from people, which helped us grow into the business we are today, providing benchmarks to millions of users every month.
How much has the company grown since the first version of Geekbench? Perhaps you no longer work on the program alone?
Right now we have a small but powerful team in Canada and we mostly work remotely, especially after the pandemic. The entire team is based in Ontario, most of the people are from Toronto.
We have people in different roles: some are focused on the benchmark itself, while others are more focused on the AI workloads we're developing. Then there are the data scientists who analyze the results to make sure we have good statistical accuracy. And then there's me, the nice face of the company.
You mentioned that the biggest problem with other benchmarks is that they are small and synthetic, so they don't mimic actual usage. How exactly is Geekbench 6 different, and how is it better?
We have 15 separate workloads in Geekbench 6 that we use to measure CPU performance. We tried to pick varied tasks that reflect what we think people use their computers and smartphones for on a day-to-day basis. So we're really trying to figure out what people actually do with their devices.
We focus on things like compression, which is important because when you download apps to your smartphone, Android unpacks and then installs them. We have HTML tests because people spend a lot of time in their browsers, so that's an important metric.
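To make that concrete, here is a minimal sketch of what a compression-style CPU workload looks like in principle, using Python's standard zlib module. The payload, compression level, and timing loop are illustrative assumptions, not Geekbench's actual implementation.

```python
import time
import zlib

# Illustrative payload: repetitive text, which compresses well, standing
# in for the kind of data an app installer has to unpack.
payload = b"Lorem ipsum dolor sit amet, consectetur adipiscing elit. " * 20000

def time_op(fn, *args, repeats=5):
    """Run fn several times and return the best wall-clock time in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

compressed = zlib.compress(payload, 6)
compress_time = time_op(zlib.compress, payload, 6)
decompress_time = time_op(zlib.decompress, compressed)

# A benchmark turns these timings into throughput (bytes per second).
print(f"compress:   {len(payload) / compress_time / 1e6:.1f} MB/s")
print(f"decompress: {len(payload) / decompress_time / 1e6:.1f} MB/s")
```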
Then there's videoconferencing, which gained momentum during the pandemic. We have a background blur workload, where your face is visible but the background is blurred so people can't see your bedroom, for example. This workload wasn't relevant three or four years ago, but it has become important because of the pandemic.
Day by day, we try to look at tasks that are CPU-intensive and really matter on the device, so that they're not just small and simple tasks. This is important because we don't want Geekbench to exist in a vacuum. We don't want it to be a test that just tells you whether one processor is better or worse than another. We need it to reflect what people actually do on their devices so they can decide if it's time to upgrade.
You mentioned that you are working on AI benchmarking. Can you tell me more about this?
We had machine learning tests in Geekbench 5, and there are new ones in Geekbench 6. As I mentioned, there is a background blur workload that mimics what Zoom does, where we segment the image: this part of the image is the foreground, so we don't blur it, but this part is the background, so we blur it.
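As a rough illustration of that segment-then-blur idea (my own sketch, not Primate Labs' code), the following assumes a foreground mask has already been produced by some segmentation model, and uses NumPy plus SciPy's gaussian_filter to composite a blurred background with a sharp foreground.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def background_blur(image, fg_mask, sigma=8.0):
    """Blur everything outside the foreground mask.

    image:   float array of shape (H, W, 3), values in [0, 1]
    fg_mask: float array of shape (H, W), 1.0 = foreground, 0.0 = background
             (in practice this mask comes from a segmentation model)
    """
    # Blur each color channel of the full frame.
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma=sigma) for c in range(3)],
        axis=-1,
    )
    # Composite: keep the original pixels where the mask says "foreground".
    mask = fg_mask[..., None]  # broadcast over the color channels
    return mask * image + (1.0 - mask) * blurred

# Toy usage: a random frame with a rectangular "person" in the middle.
frame = np.random.rand(480, 640, 3)
mask = np.zeros((480, 640))
mask[120:360, 200:440] = 1.0
out = background_blur(frame, mask)
print(out.shape)  # (480, 640, 3)
```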
We also have several other workloads, including the photo library workload, which includes some of the steps that are performed when importing photos into a library. Apps like Google Photos, for example, use machine learning to tag your images, making it easier for you to find photos of your baby or your cat later on.
We also have a separate test that we released back in 2020 and that is still under development. It looks at machine learning performance across a wide variety of workloads, taking traditional models and applications such as image recognition, object detection, face detection, and on-device translation. We run them not only on the CPU but also on the GPU and the neural processor to evaluate their performance.
And since many neural processors and modern machine learning frameworks trade accuracy for performance, we also try to reflect that trade-off as a metric. But that test is focused on machine learning and isn't as widely applicable as the main Geekbench suite.
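One way to picture such a metric is to scale raw throughput by how closely the accelerated model's outputs match a full-precision reference. The formula and weighting below are illustrative assumptions of mine, not Primate Labs' published scoring.

```python
import numpy as np

def accuracy_weighted_score(inferences_per_sec, reference, candidate):
    """Combine speed with output fidelity into one illustrative score.

    reference: outputs of a full-precision model on a validation set
    candidate: outputs of the quantized/accelerated model on the same set
    """
    # Fidelity in [0, 1]: 1.0 means the fast model matches the reference exactly.
    err = np.mean(np.abs(reference - candidate)) / (np.mean(np.abs(reference)) + 1e-12)
    fidelity = max(0.0, 1.0 - err)
    # Penalize a fast-but-sloppy accelerator by scaling throughput by fidelity.
    return inferences_per_sec * fidelity

ref = np.random.rand(1000, 10)                       # pretend model outputs
cand = ref + np.random.normal(0, 0.01, ref.shape)    # slightly off, as after quantization
print(accuracy_weighted_score(250.0, ref, cand))
```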
Can you tell us a little more about Geekbench 6?
Geekbench 6 is the evolution of Geekbench as a real-world benchmark that, over the last few versions, has measured CPU and GPU performance in specific tasks such as web browsing, photo apps, and social media filters. This is what people do every day.
In Geekbench 6, we tried to go further in keeping the benchmark up to date, with tasks like the background blur I mentioned earlier. We also tried to figure out how people use machine learning to organize their lives, which is why we created the photo library workload I mentioned earlier.
We have also improved the datasets we use for some other workloads, so workloads that were already in Geekbench 5 now work with larger datasets in Geekbench 6. Mobile devices are an obvious example. There's a difference between the camera sensors phones had in 2019, when Geekbench 5 came out, and the 48MP and 108MP sensors we have now. Image sizes have grown dramatically, and applications have to cope with that. We're trying to answer questions like "how does your phone handle a 48-megapixel image captured by its camera?" So an important push for Geekbench 6 was the need to make the datasets larger and the workloads more relevant and realistic.
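Some back-of-the-envelope arithmetic (mine, not from the interview) shows why that matters: decoded to uncompressed 8-bit RGB, a 48 MP frame is four times the size of the 12 MP frames that were typical in 2019.

```python
MP = 1_000_000
BYTES_PER_PIXEL = 3  # uncompressed 8-bit RGB

for megapixels in (12, 48, 108):
    size_mb = megapixels * MP * BYTES_PER_PIXEL / 1e6
    print(f"{megapixels:>3} MP frame: ~{size_mb:.0f} MB in memory")
# 12 MP: ~36 MB, 48 MP: ~144 MB, 108 MP: ~324 MB
```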
Another thing we have done is completely change the approach to multithreading in Geekbench 6. In Geekbench 5, we always separated the results into single-core and multi-core. In Geekbench 6, we still have the same single-core and multi-core results, but we've changed the way we get the multi-core results.
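The interview doesn't spell out the new method here, but Primate Labs has described Geekbench 6's multi-core tests as having all cores cooperate on one shared task rather than each core running its own independent copy. A minimal sketch of that distinction, with a hypothetical work() function standing in for a real workload:

```python
from concurrent.futures import ProcessPoolExecutor

N_CORES = 4
N = 4_000_000  # total problem size; small enough to run in a few seconds

def work(lo, hi):
    """Hypothetical CPU-bound task: sum of squares over [lo, hi)."""
    return sum(i * i for i in range(lo, hi))

def independent_copies():
    # Roughly the Geekbench 5 style: every core runs its own full copy
    # of the task; the score reflects aggregate throughput.
    with ProcessPoolExecutor(max_workers=N_CORES) as pool:
        futures = [pool.submit(work, 0, N) for _ in range(N_CORES)]
        return [f.result() for f in futures]

def shared_task():
    # Roughly the Geekbench 6 style: the cores cooperate on ONE task,
    # split into chunks; the score reflects how fast that single task
    # finishes, coordination overhead included.
    step = N // N_CORES
    bounds = [(i * step, (i + 1) * step) for i in range(N_CORES)]
    with ProcessPoolExecutor(max_workers=N_CORES) as pool:
        return sum(pool.map(work, *zip(*bounds)))

if __name__ == "__main__":
    print(shared_task() == work(0, N))  # same answer, computed cooperatively
```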
Geekbench 6 results cannot be compared to Geekbench 5 results, as it is a completely different test. What about minor versions like Geekbench 5.1 and 5.2? Are those results always comparable?
In the past, 3.0 results could not be compared with 3.1, nor 4.0 with 4.1. Although we can identify a lot of issues before the software is released, we miss some things and get feedback from people after it's out. We then process that feedback and fix bugs within one to two months.
So right now it's hard to say whether Geekbench 6.0 results will be comparable with 6.1, but later versions like 6.2 and 6.3 should be comparable, since by then we're mostly adding support for new devices.