
It just relies on the special property that ratios of relatively large integers have unique rational values that no digit contraction, such as rounding, can fully preserve. It is the same way we know what 'pi' is without ever identifying its exact value at any one time: most people say 3.14, even though infinitely many more digits are needed to calculate natural quantities with it.

The ratio 5/7 is the repeating decimal 0.71428571428..., which is easily reproduced as long as enough of its digits are known. In general you could never know the original ratio for sure given only 3 significant digits to work with, but the Edison data provides an out: a raw total with as many or more significant digits to work with. In the 5/7 example, if we already know the 7 (which in this context is the total number of partitions in the set) and we only have 0.714 to work with as a reduced ratio, a mathematically simple path exists to reverse the reduction and recover the original integer numerator.

I can use my knowledge of integers to surmise that some ratio more precise than 0.714 exists that has no fractional part when multiplied by 7. Notice the quantity 0.714*7 = 4.998 : What unique integer, greater than 0 and smaller than 7, yields the reduced ratio 0.714 upon division by 7 and transforms the result 4.998 back into an integer? In this example the answer is obviously 5, but this principle can be applied cautiously to larger denominators and larger precision deficits, as long as the prime factorization of the denominator is well preserved across the 2 provided ratios and the 1 derived ratio in the timeseries dataset.
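The 5/7 recovery described above can be sketched in a few lines. This is a minimal illustration, not the author's actual tooling; the function name is made up, and it assumes conventional rounding to 3 decimal places.

```python
# Sketch of the numerator-recovery idea: given a 3-digit rounded ratio and
# a known denominator, find the unique integer numerator whose exact ratio
# rounds back to the reported value.

def recover_numerator(rounded_ratio, denominator):
    """Return the integer k (0 < k < denominator) with round(k/denominator, 3)
    equal to rounded_ratio, or None if no unique candidate exists."""
    matches = [k for k in range(1, denominator)
               if round(k / denominator, 3) == rounded_ratio]
    return matches[0] if len(matches) == 1 else None

print(recover_numerator(0.714, 7))  # recovers 5, since 5/7 = 0.714285...
```

For small denominators the answer is unique; the post's caution about larger denominators and precision deficits corresponds to the `None` branch, where several numerators round to the same 3 digits.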

We know that the final raw total 'n', the Biden ratio 'x', the Trump ratio 'y', and the remaining ratio 'z' all have the following exact relationships:

a = raw Biden vote = x*n

b = raw Trump vote = y*n

c = raw remaining vote = z*n

a+b+c = n

x+y+z = 1

We also know that the raw Biden vote, the raw Trump vote, the raw remainder, and the raw totals are all integers, meaning there is no such thing as a fractional vote.

Multiply the 3-digit ratios given by 1000 and you also get integers:

x' = 1000*x

y' = 1000*y

z' = 1000*z

1000 is also an integer. This matters because, just as 3.14 is not equal to 'pi' but context tells you what is implied when it is used in pi's place, here we are looking for a number that matches or exceeds the total significant figures needed to make x, y, and z produce exact integer ratios with 'n' as the common denominator after the following transformation is complete:

x'' = x+u

y'' = y+v

z'' = z+w

When a rounding function is applied to a number, a bounding interval exists that represents all possible inputs that can generate the same 3-digit number:

round3(d) = 3.142

Range:

d ∈ {3.1420000001 , 3.1415 , 3.14159 , pi , ...}

where

round3(d) = a 3-digit rounding function

The rule that I followed when generating the arrays defines the smallest and largest 'k'-digit numbers that reduce to the specific-digit answer 'T' as follows:

The lower endpoint of the 'k'-digit interval is the 3-digit sequence with its 3rd digit reduced by one, followed by a trailing run of 4's, with the last digit always a 5:

Ex.

round6(3.1434445) = 3.143445

round5(3.143445) = 3.14345

round4(3.14345) = 3.1435

round3(3.1435) = 3.144

The upper endpoint of the 'k'-digit interval is a trailing run of 4's appended directly to the 3-digit sequence:

Ex.

round6(3.1444444) = 3.144444

round5(3.144444) = 3.14444

round4(3.14444) = 3.1444

round3(3.1444) = 3.144
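The two endpoint rules illustrated above reduce to a small string construction. This mirrors the post's stated rule as given (an assumption of this sketch, not a standard single-rounding bound), and the function name is made up.

```python
# Build the k-digit lower/upper endpoint strings for a 3-decimal value,
# per the rule above: lower = 3rd digit minus one, then 4's, ending in 5;
# upper = the 3-digit sequence followed by trailing 4's.

def endpoints(three_digit, k):
    """k-digit endpoint strings for a 3-decimal value given as a string like '3.144'."""
    whole, frac = three_digit.split('.')
    extra = k - 3                          # digits to append beyond the 3 given
    lower_frac = frac[:2] + str(int(frac[2]) - 1) + '4' * (extra - 1) + '5'
    upper_frac = frac + '4' * extra
    return f"{whole}.{lower_frac}", f"{whole}.{upper_frac}"

print(endpoints('3.144', 5))  # ('3.14345', '3.14444'), matching the chains above
```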

The numbers u, v, and w above are also rational: they are the quasi-unique fractional corrections that, added to the 3-digit ratios and bounded by the rule outlined above, make the ratios x'', y'', and z'' each share the raw voter total 'n' as a common denominator and produce an integer result.

The ideal approach would be to graph these generated arrays spanning the two extremes of the bounding interval that commonly round to the ratios x, y, and z, then look for repeats and local extrema in the datasets after separating out the fractional part, to make sure no other values are more suitable than the minima and maxima. But because that much data makes for a highly processor-intensive analysis, the best certainty something like this can usually produce is a margin of a few hundred votes when the vote total is in the millions. Past that point uniqueness is lost, because multiple ratios start pointing to increasingly disparate places, which is unusable until the number of assumptions is reduced further.
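On a small total the search can be brute-forced outright, which makes the integer constraint concrete. This is a toy sketch with an illustrative function name; real county totals in the millions would need the interval arithmetic described above rather than enumeration.

```python
# For a known total n and reported 3-digit ratios, enumerate the integer
# vote triples (a, b, c) with a + b + c = n whose exact ratios round back
# to the reported values.

def candidate_triples(n, x, y, z):
    out = []
    for a in range(n + 1):                 # candidate raw Biden votes
        if round(a / n, 3) != x:
            continue
        for b in range(n - a + 1):         # candidate raw Trump votes
            c = n - a - b                  # remainder is forced by a + b + c = n
            if round(b / n, 3) == y and round(c / n, 3) == z:
                out.append((a, b, c))
    return out

print(candidate_triples(700, 0.714, 0.271, 0.014))  # [(500, 190, 10)]
```

With n = 700 the triple is unique; as n grows, the candidate list lengthens, which is exactly the loss of uniqueness the paragraph above describes.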

The theory this procedure relies upon is a variant of the least-common-denominator idea, combined with the implicit properties of integers and how they behave.

This is only an attempt to opine about the original ratios: like finding out which math principles point toward the unique number 'pi' behind its commonly reduced representation, 3.14.

I hope this helps though, because the anomalies pointed out in the original article on this site, numbering in the hundreds of thousands of lost votes, cannot be just an artifact of roundoff errors unless those errors were subtly aggravated in the post hoc Edison timeseries.

107 days ago
1 score

The ideal situation would be to graph these generated arrays spanning the two extremes of this bounding interval that commonly round to generate the ratios x , y, and z, in order to look for repeating and local extrema in the data sets after the fractional part is separated out in order to make sure other values aren't more suitable than the minima and maxima, but because processing that much data is a highly processor-intensive analysis the best degree of certainty something like this is going to be able to produce most of the time is within a margin of a few hundred votes when the vote total is in the millions. After this point uniqueness is lost because multiple ratios start pointing to more disparate places, which is unusable until further reductions are made in the number of assumptions.

The theory that this procedure relies upon is a variant of the least common denominator theorem with the addition of the implicit properties of integers and how they behave.

This is only an attempt to opine about the original ratios, like finding out what math principles point towards a unique number 'pi' that has a commonly reductive representation of 3.14 .

I hope this helps though, because the anomalies pointed out in the original article on this site that number in the hundreds of thousands of lost votes cannot be just an artifact of roundoff errors unless they were subtly aggravated in the post hoc Edison timeseries.

107 days ago
1 score
Reason: None provided.

It just relies on the special quality of relatively big integer ratios having unique rational values that no digit contraction like rounding can preserve...the same way that we know what 'pi' is without ever identifying the exact value at any one time, most people say 3.14 even though there are infinitely more digits necessary in order to calculate natural quantities with it.

The ratio 5/7 is a repeating decimal 0.71428571428... that will be easily reproduced as long as enough of the digits of this decimal are known. Ideally you will never know for sure given only 3 significant digits of the ratio to work with, but the Edison provides an out: A raw data total with equivalent or more significant digits to work with.

We know that the final raw total 'n', the Biden ratio 'x', the Trump ratio 'y', and the remaining ratio 'z' all have the following exact relationships:

a = raw Biden vote = x*n

b = raw Trump vote = y*n

c = raw remaining vote = z*n

a+b+c = n

x+y+z = 1

We also know that the raw Biden vote, the raw Trump vote, the raw remainder, and the raw totals are all integers, meaning there is no such thing as a fractional vote.

Multiply the 3-digit numbers given by 1000 and one also gets integers.

x' = 1000*x

y' = 1000*y

z' = 1000*z

1000 is also an integer. The reason why this is important is that, like how 3.14 is not equal to 'pi' but you know what is implied by the context of its use in pi's place, in this context we are looking for a number that matches or exceeds the total available significant figures necessary to make x, y, and z produce an exact integer ratio with 'n' as the common denominator after the following transformation is complete:

x'' = x+u

y'' = y+v

z'' = z+w

When a rounding function is applied to a number, a bounding inclusive interval exists that represents all possible inputs that can generate the same 3-digit number:

round3(d) = 3.142

Range:

d = {3.1420000001 , 3.1415 , 3.14159, pi , ...}

where

round3(t) = a 3-digit rounding function

The rule that I followed when generating the arrays is that the smallest 'k'-digit number that will equal the 3-digit answer 'T' goes as follows:

The lower interval for the 'k'-digit number is a digit less than the 3rd digit in the 3-digit sequence, followed by a trailing sequence of 4's until the last digit which is always a 5:

Ex. round5(3.143445) = 3.14345

  round4(3.14345) = 3.1435

  round3(3.1435) = 3.144

The upper interval for the 'k'-digit number is a trailing sequence of 4's directly following the 3-digit sequence:

Ex. round5(3.144444) = 3.14444

  round4(3.14444) = 3.1444

  round3(3.1444) = 3.144

Above the numbers u, v, and w are also integer ratios and they are the quasi-unique floating point ratios that add to the 3-digit ratios, bounded by the outlined rule above, resulting in the ratios x'', y'', and z'' each sharing a common denominator of the raw voter total 'n' and generates an integer result.

The ideal situation would be to graph these generated arrays spanning the two extremes of this bounding interval that commonly round to generate the ratios x , y, and z, in order to look for repeating and local extrema in the data sets after the fractional part is separated out in order to make sure other values aren't more suitable than the minima and maxima, but because processing that much data is a highly processor-intensive analysis the best degree of certainty something like this is going to be able to produce most of the time is within a margin of a few hundred votes when the vote total is in the millions. After this point uniqueness is lost because multiple ratios start pointing to more disparate places, which is unusable until further reductions are made in the number of assumptions.

The theory that this procedure relies upon is a variant of the least common denominator theorem with the addition of the implicit properties of integers and how they behave.

This is only an attempt to opine about the original ratios, like finding out what math principles point towards a unique number 'pi' that has a commonly reductive representation of 3.14 .

I hope this helps though, because the anomalies pointed out in the original article on this site that number in the hundreds of thousands of lost votes cannot be just an artifact of roundoff errors unless they were subtly aggravated in the post hoc Edison timeseries.

107 days ago
1 score
Reason: Original

It just relies on the special quality of relatively big integer ratios having unique rational values that no digit contraction like rounding can preserve...the same way that we know what 'pi' is without ever identifying the exact value at any one time, most people say 3.14 even though there are infinitely more digits necessary in order to calculate natural quantities with it.

The ratio 5/7 is a repeating decimal 0.71428571428... that will be easily reproduced as long as enough of the digits of this decimal are known. Ideally you will never know for sure given only 3 significant digits of the ratio to work with, but the Edison provides an out: A raw data total with equivalent or more significant digits to work with.

We know that the final raw total 'n', the Biden ratio 'x', the Trump ratio 'y', and the remaining ratio 'z' all have the following exact relationships:

a = raw Biden vote = xn b = raw Trump vote = yn c = raw remaining vote = z*n a+b+c = n x+y+z = 1

We also know that the raw Biden vote, the raw Trump vote, the raw remainder, and the raw totals are all integers, meaning there is no such thing as a fractional vote.

Multiply the 3-digit numbers given by 1000 and one also gets integers.

x' = 1000x y' = 1000y z' = 1000*z

1000 is also an integer. The reason why this is important is that, like how 3.14 is not equal to 'pi' but you know what is implied by the context of its use in pi's place, in this context we are looking for a number that matches or exceeds the total available significant figures necessary to make x, y, and z produce an exact integer ratio with 'n' as the common denominator after the following transformation is complete:

x'' = x+u y'' = y+v z'' = z+w

When a rounding function is applied to a number, a bounding inclusive interval exists that represents all possible inputs that can generate the same 3-digit number:

round3(d) = 3.142

Range: d = {3.1420000001 , 3.1415 , 3.14159, pi , ...}

where

round3(t) = a 3-digit rounding function

The rule that I followed when generating the arrays is that the smallest 'k'-digit number that will equal the 3-digit answer 'T' goes as follows:

The lower interval for the 'k'-digit number is a digit less than the 3rd digit in the 3-digit sequence, followed by a trailing sequence of 4's until the last digit which is always a 5:

Ex. round5(3.143445) = 3.14345 round4(3.14345) = 3.1435 round3(3.1435) = 3.144

The upper interval for the 'k'-digit number is a trailing sequence of 4's directly following the 3-digit sequence:

Ex. round5(3.144444) = 3.14444 round4(3.14444) = 3.1444 round3(3.1444) = 3.144

Above the numbers u, v, and w are also integer ratios and they are the quasi-unique floating point ratios that add to the 3-digit ratios, bounded by the outlined rule above, resulting in the ratios x'', y'', and z'' each sharing a common denominator of the raw voter total 'n' and generates an integer result.

The ideal situation would be to graph these generated arrays spanning the two extremes of this bounding interval that commonly round to generate the ratios x , y, and z, in order to look for repeating and local extrema in the data sets after the fractional part is separated out in order to make sure other values aren't more suitable than the minima and maxima, but because processing that much data is a highly processor-intensive analysis the best degree of certainty something like this is going to be able to produce most of the time is within a margin of a few hundred votes when the vote total is in the millions. After this point uniqueness is lost because multiple ratios start pointing to more disparate places, which is unusable until further reductions are made in the number of assumptions.

The theory that this procedure relies upon is a variant of the least common denominator theorem with the addition of the implicit properties of integers and how they behave.

This is only an attempt to opine about the original ratios, like finding out what math principles point towards a unique number 'pi' that has a commonly reductive representation of 3.14 .

I hope this helps though, because the anomalies pointed out in the original article on this site that number in the hundreds of thousands of lost votes cannot be just an artifact of roundoff errors unless they were subtly aggravated in the post hoc Edison timeseries.

107 days ago
1 score