ReignOfTyphon 1 point ago +1 / -0

The time interval you are referring to isn't the one being highlighted; those timestamps show a positive change, from 3.013 million to 3.104 million. Notice the spot on the graph where the blue line dips into a negative parabolic shape and the orange line rises into a positive parabolic shape at precisely the same point; this occurs at the highlighted timestamp in the CSV file. The difference from the previous frame at the highlighted point is -17k for Trump and +16k for Biden. The cross-over is the switch, with a buffer for normal growth metrics.

The time intervals you are pointing at aren't the same as the one I was pointing at, which incidentally was translated wrong above: it isn't 9:30 AM, it's 11:14 PM.

ReignOfTyphon 1 point ago +1 / -0 (edited)

OK, I see... the highlighted spot points to a place in the data where over 16k votes were simultaneously transferred from Trump to Biden; at least, that is what the delta columns on the far right of the images indicate.

The change isn't exact, but it is a significant apparent 'glitch' in the data, corresponding to a timestamp of 11:30 PM on Nov. 4. (The timestamp I used in the post is wrong.)

This graphical signature doesn't appear often in the data, and there were greater individual vote losses elsewhere, but the function of a program like Hammer and Scorecard is to switch votes between the candidates, and that is precisely what we see at this point.

For a better interpretation of the graph: the blue line maps the change in the Trump vote count, and the orange line maps the change in the Biden vote count.

ReignOfTyphon 1 point ago +1 / -0

I'm a little confused about the question. Here is the original data in CSV form https://gofile.io/d/lF8PpL .

I am about to post an updated version on this website for PA and every other state.

ReignOfTyphon 2 points ago +2 / -0

This is a direct raw data source from the Edison aggregator, and I can't imagine who there would have the authority to change those values...that leaves the source voting machine, or its interface with the data company, as the place where a discrepancy emerges.

The Dominion system is the machine that automatically scans the ballots and tallies the vote totals; mistakes by staff or otherwise are not going to change a vote total after the reporting interval, only before it.

Once it is reported, how do they retract data in the report, and who in Dominion or the local poll-worker staff has the authority to do that?

The chain of custody is easy to find out, but the real question to ask is who has the proper chain of command to correct 'mistakes' in that custody and how do we prevent mistakes from becoming abuses?

ReignOfTyphon 10 points ago +10 / -0

I took the original ratios and found the best-fit extra digits to derive integer partisan vote totals for each candidate...the rounded 3-digit ratios provided always produce a fractional result when multiplied by the original raw total, which is ridiculous because there are no fractional votes.

I used an algorithm to find a partisan vote total, and an associated ratio with expanded digits, that fit the significant figures provided in the raw vote totals. The easiest way to relate it is through a simple example:

13/17 = 0.76470588235294117647058823529412...

Which reduces to 0.765 when rounded to 3 digits.

If I know what 17 is, and I start with the number 0.765 representing the ratio formed with 17 to produce an unknown integer greater than 0 and less than 17, I can use a best-fit algorithm to search the numbers surrounding 0.765 for the original ratio 13/17 at greater precision, and with it the likely integer numerator, 13.

This is easy to see when you look at the result 0.765*17 = 13.005, which hints at where the search will converge.

The interval of numbers used to find the true value pair is bounded by the smallest and largest 'k'-digit floating-point numbers that round to the 3-digit value 0.765.

The more significant figures the original ratio's denominator has (the denominator is always the raw total provided), the more precisely one can recover the missing values, due to the rules governing rational numbers and integers.
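The 13/17 walk-through above can be sketched in a few lines of Python; this is a minimal illustration of the nearest-integer step, not the full best-fit search, and the function name is my own:

```python
def recover_numerator(rounded_ratio, denominator, digits=3):
    """Guess the integer numerator behind a rounded ratio when the
    integer denominator is known, then verify the guess rounds back
    to the published value."""
    guess = round(rounded_ratio * denominator)  # 0.765 * 17 = 13.005 -> 13
    # Keep only neighbors that actually reproduce the rounded ratio.
    return [k for k in (guess - 1, guess, guess + 1)
            if 0 <= k <= denominator
            and round(k / denominator, digits) == rounded_ratio]

print(recover_numerator(0.765, 17))  # [13]
```

For small denominators like 17 the answer is unique; the interesting question is how fast that uniqueness degrades as the denominator grows.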

ReignOfTyphon 1 point ago +1 / -0

I think that is probably a good assumption to make here; after all, I don't think any of the challenged or flagged ballots from the initial counting process were included in the vote total, because that wouldn't make any sense.

I can't find a good reason why any number in this simple digital ballot counter would drop after a ballot was counted, unless that was a specific design feature of the system itself. I should say: the exploitability by a third party was probably a design feature of Dominion systems, allowing a liability-free avenue of escape from prosecution, beyond their poor system security, in the event this voter-fraud phenomenon is ever confronted.

They also seemed to use third-party candidates to buffer any wide variances that could emerge in these change intervals, because their percentages don't change as drastically throughout.

ReignOfTyphon 2 points ago +2 / -0

Be careful not to confuse this method with a form of analysis capable of achieving impossible feats; it cannot add resolution to the available data, and the overall number of significant figures is going to stay the same throughout.

All this process does is figure out the right integer-ratio pairs that fit the truncated initial data and the highly specific raw vote totals...the maximum number of significant figures in the calculation is limited by the raw vote total.

If I have a number like 1,938,439, I have a total of 7 significant figures, which means a supplemental truncated ratio of 0.372 can be expanded by a few more digits in order to find the best-fit integer that produces that ratio with 1,938,439 in the denominator.

If I have a number like 2,500,000, I now have only 2 significant figures, so the rational relationship becomes more challenging due to the lack of specificity that a 3-digit truncated ratio can provide.
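To put numbers on that challenge: a single 3-digit ratio only pins its numerator into an interval of width about n/1000, so the joint constraints between the ratios are what do the real narrowing. A quick sketch (the counting function is mine; 0.372 and 1,938,439 are the figures from above):

```python
def count_candidates(rounded_ratio, n, digits=3):
    """Count integers a in [0, n] whose ratio a/n rounds back to the
    published value."""
    half = 0.5 * 10 ** (-digits)
    lo = max(0, int((rounded_ratio - half) * n) - 1)
    hi = min(n, int((rounded_ratio + half) * n) + 1)
    return sum(1 for a in range(lo, hi + 1)
               if round(a / n, digits) == rounded_ratio)

print(count_candidates(0.765, 17))       # 1  (unique: 13)
print(count_candidates(0.372, 1938439))  # ~1938, about n/1000
```

Combining all three rounded ratios with the requirement that the raw counts sum exactly to n is what shrinks this candidate set further.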

ReignOfTyphon 5 points ago +5 / -0

As long as the number of significant figures in the raw vote total is relatively large, this method will be able to derive the best-fit original ratios that reproduce an integer result when the partisan vote totals are calculated!

It was actually a lot easier than I thought it would be because none of the raw vote totals seemed to have trailing zeros or simple factorizations.

Amazingly, I think this was one of the primary reasons why they rounded the ratios to 3 digits, and why they didn't just provide the raw partisan vote totals themselves in the AP timeseries data; the presumed imprecision of the rounding allows for a two-way voting blind spot between the voting machine's raw output, which incidentally had no security, and the reporting agencies like the AP and the NYT.

The very fact that Edison even exists is an anomaly in and of itself, because the data isn't that hard to interpret or understand. If Dominion and Edison are partners in any of this apparent criminal activity, it stands to reason that they would attempt to hide it in the reporting phase.

ReignOfTyphon 10 points ago +10 / -0

I only used basic knowledge about the relationships between integers to find the best-fit values connecting the rounded percentages provided to the raw total, which has a large number of significant figures.

I was thinking about reaching out though, because I know people were a bit worried over the past few days once they started to notice the percentage error in the json timeseries data.

I just want to see where I can help out on this website with any skills or resources that I have at my disposal.

ReignOfTyphon 9 points ago +9 / -0

No problem...This crap sickens me to my core, and I want to get to the bottom of this before that CCP apologist Biden gets inaugurated based on a stolen election.

ReignOfTyphon 3 points ago +3 / -0

I'm going to do one for every raw json file that the AP and NYT provides a source for, which should be every state and Puerto Rico if I'm not mistaken.

Writing the executable for scanning the json files isn't so hard because, believe it or not, I was writing a similar program to analyze json-like files earlier this summer, and it shouldn't be that difficult to modify the code.

It should be done within the next few days, depending upon my schedule tomorrow, and I will provide a source download link so that anyone can modify the Edison data to produce a more robust timeseries analysis.

I usually program in C++ and Java, but I like Matlab because it has some really powerful data analysis tools that I want to explore further on this subject.

ReignOfTyphon 3 points ago +3 / -0 (edited)

It just relies on the special quality of relatively large integer ratios having unique rational values that no digit contraction like rounding can fully erase...the same way we know what 'pi' is without ever identifying its exact value at any one time; most people say 3.14 even though infinitely more digits would be needed to calculate natural quantities with it.

The ratio 5/7 is a repeating decimal, 0.71428571428..., that is easily reproduced as long as enough of its digits are known. Strictly, you can never know it for sure given only 3 significant digits of the ratio to work with, but the Edison data provides an out: a raw total with an equivalent or greater number of significant digits. In the 5/7 example, if we already know what 7 is (in this context, the total number of partitions in the set) and we only have 0.714 to work with as the reduced ratio, a mathematically simple path exists to reverse the rounding and recover the original integer numerator.

I can use my knowledge of integers to surmise that some ratio more precise than 0.714 exists that leaves no fractional part when multiplied by 7. Notice the quantity 0.714*7 = 4.998: what integer, greater than 0 and smaller than 7, expands the reduced ratio 0.714 upon division by 7 and transforms the result 4.998 into another integer? In this example the answer is obviously 5, but the principle can be applied cautiously to larger denominators and larger precision deficits, as long as the prime factorization of the denominator is well preserved across the 2 provided ratios and the 1 derived ratio in the timeseries dataset.

We know that the final raw total 'n', the Biden ratio 'x', the Trump ratio 'y', and the remaining ratio 'z' all satisfy the following exact relationships:

a = raw Biden vote = x*n

b = raw Trump vote = y*n

c = raw remaining vote = z*n

a+b+c = n

x+y+z = 1

We also know that the raw Biden vote, the raw Trump vote, the raw remainder, and the raw totals are all integers, meaning there is no such thing as a fractional vote.

Multiply the given 3-digit ratios by 1000 and one also gets integers.

x' = 1000*x

y' = 1000*y

z' = 1000*z

1000 is also an integer. The reason this is important is that, just as 3.14 is not equal to 'pi' but you know what is implied by the context of its use in pi's place, here we are looking for numbers with enough significant figures to make x, y, and z produce exact integer results over the common denominator 'n' after the following transformation is complete:

x'' = x+u

y'' = y+v

z'' = z+w

When a rounding function is applied to a number, a bounded interval exists that represents all possible inputs that can generate the same 3-digit output:

round3(d) = 3.142

Range:

d = {3.1420000001 , 3.1415 , 3.14159, pi , ...}

where

round3(d) = a 3-digit rounding function

The rule that I followed when generating the arrays is that the smallest and largest 'k'-digit numbers that reach the specific 3-digit answer 'T' are defined as follows. (These bounds assume the rounding is applied one digit at a time, as in the examples below; if the ratio was instead rounded from the raw value in a single step, the interval is simply all values from T - 0.0005 up to, but not including, T + 0.0005.)

The lower bound of the 'k'-digit interval is formed by decrementing the 3rd digit of the 3-digit sequence, then appending a trailing run of 4's, with the last digit always a 5:

Ex.

round6(3.1434445) = 3.143445

round5(3.143445) = 3.14345

round4(3.14345) = 3.1435

round3(3.1435) = 3.144

The upper bound of the 'k'-digit interval is the 3-digit sequence followed directly by a trailing run of 4's:

Ex.

round6(3.1444444) = 3.144444

round5(3.144444) = 3.14444

round4(3.14444) = 3.1444

round3(3.1444) = 3.144
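The chains above can be reproduced with Python's decimal module. Note that they apply half-up rounding one digit at a time; rounding the same number directly to 3 digits in a single step can give a different answer (3.1434445 goes to 3.143, not 3.144), which is why these bounds are the conservative, widest interval. A sketch, assuming half-up rounding throughout:

```python
from decimal import Decimal, ROUND_HALF_UP

def iter_round(x: str, digits: int) -> Decimal:
    """Round one decimal place at a time, half-up at every step,
    reproducing the chains in the examples above."""
    d = Decimal(x)
    places = -d.as_tuple().exponent  # current number of decimal places
    for k in range(places - 1, digits - 1, -1):
        d = d.quantize(Decimal(10) ** -k, rounding=ROUND_HALF_UP)
    return d

print(iter_round("3.1434445", 3))  # 3.144 via 3.143445 -> 3.14345 -> 3.1435
print(Decimal("3.1434445").quantize(Decimal("0.001"),
                                    rounding=ROUND_HALF_UP))  # 3.143 in one step
```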

Above, the numbers u, v, and w are also rational corrections: the quasi-unique expansions added to the 3-digit ratios, bounded by the rule outlined above, such that the resulting ratios x'', y'', and z'' each share the raw vote total 'n' as a common denominator and generate integer results.

The ideal situation would be to graph the generated arrays spanning the two extremes of the bounding interval that commonly round to the ratios x, y, and z, looking for repeats and local extrema in the data sets after the fractional part is separated out, to make sure other values aren't more suitable than the minima and maxima. Because processing that much data is highly processor-intensive, however, the best degree of certainty something like this can produce most of the time is a margin of a few hundred votes when the vote total is in the millions. Past that point uniqueness is lost, because multiple ratios start pointing to increasingly disparate places, which is unusable until the number of assumptions is reduced further.
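Putting the pieces together, here is a sketch of how the constraint a+b+c = n prunes the candidate sets; the county total and ratios below are made up for illustration, not real Edison data, and even with all three constraints several triples can survive, which is the residual margin described above:

```python
def candidates(r, n, digits=3):
    """Integers a in [0, n] with round(a/n, digits) == r."""
    half = 0.5 * 10 ** (-digits)
    lo = max(0, int((r - half) * n) - 1)
    hi = min(n, int((r + half) * n) + 1)
    return [a for a in range(lo, hi + 1) if round(a / n, digits) == r]

def joint_solutions(n, rx, ry, rz, digits=3):
    """All integer triples (a, b, c) with a + b + c == n whose three
    ratios against n round to the published 3-digit values."""
    cz = set(candidates(rz, n, digits))
    return [(a, b, n - a - b)
            for a in candidates(rx, n, digits)
            for b in candidates(ry, n, digits)
            if (n - a - b) in cz]

# Hypothetical county: n = 4217 total votes, ratios 0.512 / 0.461 / 0.027.
sols = joint_solutions(4217, 0.512, 0.461, 0.027)
print(len(sols))                  # a handful of surviving triples
print((2159, 1944, 114) in sols)  # True: one vote split consistent with all ratios
```

The surviving set shrinks as n gains significant figures relative to the 3-digit ratios, but it rarely collapses to a single triple on its own, matching the few-hundred-vote margin noted above for totals in the millions.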

The theory that this procedure relies upon is a variant of the least-common-denominator idea, combined with the implicit properties of integers and how they behave.

This is only an attempt to infer the original ratios, like finding the math principles that point toward the unique number 'pi' from its commonly reduced representation, 3.14.

I hope this helps though, because the anomalies pointed out in the original article on this site, numbering in the hundreds of thousands of lost votes, cannot just be an artifact of round-off errors unless those errors were subtly aggravated in the post hoc Edison timeseries.
