
Why Hit & Run is absurd

Started by Bayes, December 22, 2012, 10:31:31 AM



Bayes

Quote from: Number Six on January 03, 2014, 01:26:51 PM
Bayes,

the true risk of ruin must be calculated by accounting for the deviation of the results of your actual wagers.


Agreed. There can be no risk of anything if no chips go on the table.


Quote


So, if you factor virtual results into your risk of ruin, the risk can only increase, because the deviation measurement would be incorrect; the longer it goes on like that, the more woefully inaccurate it becomes.



There are several formulas for calculating the chance of ruin, but none of them use a prior deviation (which would be meaningless for games of independent trials anyway), so there is no possibility of corrupting the data. You could use a standard formula, but in that case you would need to determine your parameters empirically from past results.
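As an illustration (my own sketch; `ruin_probability` and its parameters are hypothetical names, not from any particular reference), the classical gambler's ruin result for flat even-money betting can be written in a few lines of Python:

```python
def ruin_probability(bankroll, target, p):
    """Classical gambler's ruin: the chance of hitting 0 before
    reaching `target` units, starting with `bankroll` units and
    winning each flat 1-unit even-money bet with probability p."""
    q = 1.0 - p
    if abs(p - q) < 1e-12:      # fair game: ruin chance is 1 - bankroll/target
        return 1.0 - bankroll / target
    r = q / p                   # ratio of loss to win probability
    return 1.0 - (1.0 - r ** bankroll) / (1.0 - r ** target)
```

For a fair coin (p = 0.5) with a 50-unit bankroll aiming for 100, this gives exactly 0.5; plug in the European even-money chance p = 18/37 and the ruin probability climbs above 0.9, which is the house edge doing its work.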

XXVV

'determine your parameters empirically from past results'.

This is the essence of determining what I term bet characteristics: knowledge of, and familiarity with, the behavioral parameters of a bet are constructed from a statistically suitable sample of empirical evidence.


Some might argue infinite data is required, but I find a sample of 1,000 bets sufficient. This may require, say, 10,000 spins or more to be witnessed (it has to be live, genuine data) and is time-consuming, but the benefits are durable.


Thanks as always, Bayes, for your observations. XXVV



Number Six

Quote from: Bayes on January 10, 2014, 10:06:50 AM
There are several formulas for calculating the chance of ruin, but none of them use a prior deviation (which would be meaningless for games of independent trials anyway), so there is no possibility of corrupting the data. You could use a standard formula, but in that case you would need to determine your parameters empirically from past results.

There are many formulas that can be used for calculating the risk, depending on the type of game. These apply to short-term performance only, whether the aim is to earn or simply to survive until you hit a lucky streak. In either case, the risk of ruin can factor in deviation, variance and ultimately the volatility of the bankroll. The analysis provides an exact bet-by-bet chance of ruin. I'll point you to some references when I am once again compos mentis.
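A fixed-horizon risk of this kind can also be estimated by simulation when no closed formula is to hand. A rough Monte Carlo sketch (my own illustration; flat even-money staking, and all names and parameters are assumptions):

```python
import random

def mc_ruin(bankroll, bets, p, trials=20000, seed=7):
    """Estimate the chance of going broke within a fixed number of
    flat 1-unit even-money bets, by simulating many sessions."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        b = bankroll
        for _ in range(bets):
            b += 1 if rng.random() < p else -1
            if b <= 0:          # busted before the horizon
                ruined += 1
                break
    return ruined / trials
```

With p = 18/37, a 50-unit bankroll rarely busts inside 200 bets, while a 10-unit bankroll busts in a large fraction of sessions; over a short horizon, bankroll size matters at least as much as the edge.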

And out of interest, if deviation is meaningless in a random game, what does that say about RTM?

It may, in fact, be more effective to place bets according to the "virtual" chance of ruin as compared to the "virtual" regression. Would there be a difference? Who knows... it would certainly make the bet selection much simpler.

Bayes

Quote from: Number Six on January 10, 2014, 10:55:03 PM

And out of interest, if deviation is meaningless in a random game, what does that say about RTM?



I didn't say that deviation was meaningless per se, only that it would be meaningless to incorporate 'virtual' deviation into any analysis of gambler's ruin. From a purely mathematical point of view, it would be nonsense.


The problem with discussions about independent trials, gambler's fallacy etc. is that the so-called "math boyz" never seem to give any credence to anything that can't be mathematically formulated. According to them, if something can't be predicted mathematically, it must be a fallacy.


Therefore, because "the wheel has no memory", every outcome has the exact same chance as any other, regardless of history. While this is true mathematically, we can also observe that under normal conditions (no bias) the outcomes are distributed in repeatable and somewhat predictable ways; but that doesn't mean that the wheel has a memory in the mathematical sense.


RTM is more pronounced the more random the outcomes are. A typical example is that of students' scores on a multiple-choice test. The top 5% of students would be more likely to score worse, and the bottom 5% more likely to score better, if retested, because the 'extreme' scores are likely to be accounted for in SOME measure by luck and random variation. When the outcomes are entirely due to randomness (analogous to a student having no clue as to any of the answers, but simply guessing), the regression effect is stronger. If you think of your score as being a combination of LUCK + SKILL, the variation (regression) is accounted for by luck, not skill (which remains largely constant); so if skill is removed from the equation, you are left with only luck, and therefore a large measure of regression.
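The LUCK + SKILL decomposition is easy to simulate. In this sketch (all names and weightings are my own, purely for illustration), each score is a weighted mix of a fixed skill and a freshly drawn luck term:

```python
import random

def rtm_demo(n_students=10000, skill_weight=0.0, seed=1):
    """Return (first-test mean, retest mean) of the top 5% of students,
    where each score = skill_weight * skill + (1 - skill_weight) * luck."""
    rng = random.Random(seed)
    skills = [rng.gauss(0, 1) for _ in range(n_students)]  # fixed per student

    def score(skill):
        return skill_weight * skill + (1 - skill_weight) * rng.gauss(0, 1)

    first = [score(s) for s in skills]
    second = [score(s) for s in skills]   # same skill, fresh luck
    top = sorted(range(n_students), key=lambda i: first[i],
                 reverse=True)[:n_students // 20]
    return (sum(first[i] for i in top) / len(top),
            sum(second[i] for i in top) / len(top))
```

With `skill_weight = 0` (pure guessing) the retest mean of the 'best' students crashes back to roughly zero, the full-regression case described above; with a high `skill_weight` they regress only partially.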

sqzbox

Quote
The problem with discussions about independent trials, gambler's fallacy etc. is that the so-called "math boyz" never seem to give any credence to anything that can't be mathematically formulated. According to them, if something can't be predicted mathematically, it must be a fallacy.

Spencer-Brown argued that a fundamental flaw exists in our view of randomness.  To quote from an interesting article (URL below) -

Quote
Classical probability focuses on the 'atomic' level. For example, if we throw a six-sided die 100 times, we treat this as 100 independent events. To work out the likelihood of two successive results of '6', we combine these 'atomic' events. But this is begging the question, according to Spencer-Brown. It assumes independence instead of proving it empirically. There is no reason in principle why a series cannot gradually become less and less biased at the 'atomic' level, but remain biased on the various higher 'molecular' levels for arbitrarily long spans.

The paper I am quoting from, which I find immensely interesting, is "Probability in Decline" by Dean M. Brooks and can be found here - http://www.statlit.org/pdf/2010BrooksASA.pdf#page=1&zoom=auto,0,792

Now, don't think that this is "the Answer" - sadly, the decline is not large enough to overcome the house edge in any game of chance.  But it is an interesting view and one that supports Bayes' view of the attitude of the math boyz.

Personally, I suspect that "an Answer" lies in the combination of what Spencer-Brown refers to as "molecular events" and RTM of those events, probably combined with a suitably constructed progression, since the effect, from a practical perspective, is small but real. It might even be worth considering a discussion topic on "molecular events" so that we could all gain a better understanding of these phenomena.


Number Six

Quote from: Bayes on January 13, 2014, 09:03:18 AM

I didn't say that deviation was meaningless per se, only that it would be meaningless to incorporate 'virtual' deviation into any analysis of gambler's ruin. From a purely mathematical point of view, it would be nonsense.


I agree, Bayes; that was kind of the point, even regarding virtual anything.

Virtual deviation in gambler's ruin is as pointless as virtual deviation in regression towards the mean, since the deviation is not actually measured against anything; it is only tracked from a certain point in time. That, to me, affords no benefit. The likelihood is that tests of the two methods would show similar results, though virtual ruin would make for a simpler bet selection, i.e. you could just bet red according to the risk, rather than having to make difficult subjective decisions based on regression.