Forum Replies Created
Is it possible to add the score ID next to the graph on the model pages? It would make it easier to know which score is which, because on models with many scores you can easily confuse them.
Actually, that is already planned since the last update. I just never managed to finish it because there are some other details that I want to get done and I want it to be done all at once. It will be done somewhere around mid-July, probably earlier.
Yes, some scores are surely wrong. That’s also why the average counts. However, these outliers have too big an impact on the average score, which is why we will change the way the average is calculated, sometime during this coming summer when we find some time. We will switch from the mean to the median, which will eliminate the influence of outliers on the average!
Yes, that would be a possibility. I’ll put this on the list of possible things to add! For the moment I’ll give priority to things like changing the arithmetic mean to the median however because, in the end, it’s sadly also a financial question. 🙁 If the site grows then everything is possible, haha.
That sounds like a good idea. I had the same “problem”, having my highest scores with Buffer at 32. The only issue I see with this plan is what to do with all the existing benchmark scores. They’ve been added over the past years and are consistent because everyone ran the same test. The original test (Evan’s Benchmark Test) goes back ten years and was shared on many sites, meaning that a lot of people ran exactly the same thing.
Another question is: will the gaps/margins between the various Macs differ in size, or will these spaces stay the same while all scores shift up or down overall? Do you know what I mean? In the end, this is probably the most important thing, because that’s what makes the models comparable to each other in the first place.
Haha same here. I’m actually trying to sell my 12-core cMP to get this Mac Mini.
Seen and removed! I saw that there are quite a few models for which there are scores of 20 tracks, which seems unrealistic for most models, and I believe it was spam because 20 is the lowest possible score you can select and therefore the default score selected on the “Submit a Score” page. I guess these scores were added by bots.
So, to avoid that, I added a user-login feature through which you can now add scores. That meant that all previously submitted scores had to be assigned to my account, so it displays my username everywhere even though I didn’t add any scores myself. Some models don’t have any scores at all: https://logicbenchmarks.com/apple-mac-model/macbook-air-core-i5-1-6-ghz-13″-late-2018/
(they’re not displayed in the chart but can be found through the drop-down search menu)
As for the incorrect average, this is something I have to correct. Instead of having the arithmetic mean displayed (which is the case right now), I’d like to have the median shown, which is less sensitive to outliers. It’s a loss of precision, but very robust to extreme values, and I think that’s more important.
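To illustrate why the median is more robust than the arithmetic mean here, a quick sketch in Python (the scores below are made-up example values, including one unrealistic 20-track spam entry):

```python
from statistics import mean, median

# Hypothetical track counts for one Mac model; the 20 is a spam/outlier entry.
scores = [55, 58, 60, 62, 20]

print(mean(scores))    # 51.0 -- dragged down by the single outlier
print(median(scores))  # 58   -- unaffected by one extreme value
```

One bogus submission shifts the mean by several tracks, while the median stays at a typical value.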
I had the plan to add this as well, but I am unsure about a few points:
– Should the “Hackintosh” be added as a separate model to the list, or should the modified spec be listed separately but as part of the original model (for example, like you did)?
– If it is the latter, there will be some distortion in the stats of the original Mac on which yours is based.
I’ll see if there’s a way to add it as a separate model without distorting the benchmark.
For now I have planned to make each score viewable individually, with the following additional information that you will be able to add:
– Mac OS Version
– Logic Pro X Version
– Audio Interface
You guys tell me if there’s anything else that should be included!
I honestly can’t say if the Core MIDI error affects the result or not, but other than that it looks good! Your Mac Mini is a beast. We haven’t had any Mac Minis this high in the benchmark. iMacs started to catch up with Mac Pros, but now even Mac Minis overtake them. 🙂
You’re welcome for the test!
Don’t worry about your English, it’s at least as good as mine. 🙂
I’m actually surprised by how powerful that Mac Mini is. I hadn’t seen that it scored so high on Geekbench. I guess your score is real!
But did you finally bypass Space Designer or not? And did you get any messages that some IRs (or other files) could not be found?
I haven’t been able to reach the guy from Logic Pro Help yet, so I decided to try to fix the test myself. Basically, the problem with the IRs that cannot be found should be solved now. I selected an option that includes them inside the project file, whether or not you have downloaded them before, so everyone should have them now.
Can you guys check this out for me? You can download the test here. I’ll be happy to hear if the problems still persist or not. 🙂 Thank you.
First, thanks for reporting this back. 🙂
It looks like there are other people who had similar issues: https://www.gearslutz.com/board/apple-logic-pro/371545-logic-pro-multicore-benchmarktest-89.html#post10710606
Apparently some IRs were missing in that case, which is why that person was able to reach a very high score. I think the problem is the project file of the benchmark test. Where did you download the test? The one that can be downloaded here is the original benchmark test, and I believe it is outdated by now: some IRs are apparently not downloaded by default in Logic Pro X, so not everyone can run the test the same way.
Someone has found a new test on this site here: https://www.logicprohelp.com/forum/viewtopic.php?t=138612
There’s a discussion about a new test here: https://logicbenchmarks.com/forums/topic/does-the-test-need-an-update/#post-1191
I sent a message to the person on LogicProHelp asking if I can use their test, but haven’t received an answer yet.
The thing is, the test must be compatible with all new versions of Logic Pro, but shouldn’t make all the previous test results obsolete. If we end up with ten different benchmark tests, comparison becomes impossible. So I believe the new test should not be too heavily modified; it should just correct the bugs of the old test with the most minimal changes possible. I don’t know if the LogicProHelp test does that, but maybe someone else can give their input on whether that test is suited as a replacement or not. 🙂
That looks good! I’ll get in touch with him and see if we can use this. 🙂 Thank you.
I remember I had that too, but somehow I fixed it. I don’t know if these settings were removed from the plugins, or if their files were just located somewhere else because of a change in an update.
What do you guys recommend changing it to? It has to be something similar so as not to make all the old test results obsolete. Will every setting of the same plugin use the same amount of CPU power?
What would you change?