Quote:
Originally Posted by DeathsSilkyMist
I did explain what I was expecting already earlier. I've asked you before in other threads to read the thread before making claims like these.
I guess I should be more clear. I'm talking about making a single specific prediction ahead of each experiment. For example, you just ran an experiment with overcapped AC, with and without a shield. What hypothesis was that experiment designed to test?
One hypothesis would be "There's gonna be a difference somewhere when you use a shield". Another could be "The average damage per hit will be lower with a shield". Or "adding 12 shield AC will be equivalent to adding 2 AC to an otherwise capped toon".
This has several benefits. For one, it gives any results that match your expectations greater credibility. Kind of like "calling your shot". It also forces you to think through exactly what question you're trying to answer, and to make sure whatever experiment you run will actually help answer it. Spending a moment on experiment design can help avoid wasting time on experiments that give inconclusive results, and it forces you to decide up front what you're trying to measure and what metrics you want to calculate. For example, I was looking at the ratio of min-hit to max-hit, while you seem to be more interested in total damage or damage per hit.
So what I'm suggesting is a practice that I think leads to good mental self discipline and better designed experiments that lead to easier analysis and more defensible conclusions. It's at the heart of the cycle of the scientific method:
* First you do exploratory experiments. You cannot draw any conclusions from these, but you can generate interesting questions and hypotheses
* Next you generate a testable hypothesis. This is a specific, measurable prediction that can be either confirmed or rejected
* Next you design an experiment to test the hypothesis. Part of this (in our case) will be determining how many samples to parse for each side, and what metric to calculate.
After all that, you can run the experiment and report the results. Now, I'm not saying you have to go through that whole process. But I do think taking steps towards that ideal will be helpful and productive. Just a simple "call your shot" before running an experiment. For example: "I'm going to measure average damage per hit with 178 AC, with and without a 12 AC shield. I'll take 1000+ hits per side. I expect the damage per hit to be lower with the 12 AC shield. This is because of x, y, and z."
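To make the "call your shot" example concrete, here's a sketch of how you could analyze that kind of parse with Python's standard library. The damage numbers below are placeholders, not real parse data; a Welch t-statistic just tells you whether the difference in mean damage per hit is large relative to the noise in the samples:

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for comparing two independent sample means
    without assuming equal variances."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    se = math.sqrt(var_a / len(sample_a) + var_b / len(sample_b))
    return (mean_a - mean_b) / se

# Placeholder hit logs (NOT real parses): damage per hit at 178 AC,
# with and without a 12 AC shield. A real test would use 1000+ hits per side.
without_shield = [20, 28, 36, 44, 52, 60, 36, 44]
with_shield    = [20, 24, 32, 40, 44, 52, 28, 36]

print(welch_t(without_shield, with_shield))
```

A positive t value here means the no-shield side hit harder on average; the larger its magnitude, the less likely the gap is just random variance in the hit rolls.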
Another example: the evidence suggests that at level 5 there's a 45 AC softcap, and a shield provides a couple AC above that. What's the softcap at level 6? One hypothesis is that the formula is 4*level+25, which would predict a 49 AC softcap. So there are a couple of experiments you could run to test that hypothesis. And if over-cap shield AC counts at some multiplier value like 0.2, you can run experiments to try to pin down that multiplier.
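That hypothesis is easy to turn into concrete predictions before you parse anything. A sketch (the formula and the 0.2 multiplier are the guesses from the paragraph above, not established values):

```python
# Hypothesis (not confirmed): the AC softcap follows 4*level + 25.
def predicted_softcap(level):
    return 4 * level + 25

# At level 5 this predicts the observed ~45 AC softcap;
# at level 6 it predicts 49, which is the number to test against.
print(predicted_softcap(5))  # 45
print(predicted_softcap(6))  # 49

# If over-cap shield AC counts at a guessed multiplier of 0.2, then a
# 12 AC shield adds about 2.4 effective AC -- consistent with the earlier
# "12 shield AC is equivalent to ~2 AC" hypothesis.
print(0.2 * 12)
```

If the level 6 parse contradicts the 49 prediction, the formula is rejected and you try the next candidate; that's the confirm-or-reject loop in miniature.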
But to sum up:
Quote:
Originally Posted by DeathsSilkyMist
I did explain what I was expecting already earlier. I've asked you before in other threads to read the thread before making claims like these.
That wasn't a claim, that was a suggestion. I've read the damn thread; that's why I've been participating in it. If you're going to get defensive and make unwarranted personal attacks, I'll take that as my cue to bow out of this thread and leave you to it, absent an apology.